Test Report: Hyperkit_macOS 19423

7f7446252791c927139509879c70af875912dc64:2024-08-18:35842

Failed tests (18/276)

TestOffline (195.35s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-476000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-476000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : exit status 80 (3m9.956101005s)
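
The "(dbg) Run:" line above is the integration harness executing the minikube binary as a subprocess and failing the test when it exits non-zero. A minimal sketch of that pattern in Go, with a hypothetical test name and none of the real harness's setup and teardown plumbing:

// offline_sketch_test.go - hypothetical, minimal version of the
// run-the-binary-and-check-the-exit-code pattern seen in this log.
package integration

import (
	"os/exec"
	"testing"
)

func TestOfflineSketch(t *testing.T) {
	// Same invocation that failed above with exit status 80.
	cmd := exec.Command("out/minikube-darwin-amd64", "start",
		"-p", "offline-docker-476000", "--alsologtostderr", "-v=1",
		"--memory=2048", "--wait=true", "--driver=hyperkit")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// A non-zero exit surfaces here as an *exec.ExitError.
		t.Fatalf("minikube start failed: %v\n%s", err, out)
	}
}
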
-- stdout --
	* [offline-docker-476000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "offline-docker-476000" primary control-plane node in "offline-docker-476000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "offline-docker-476000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
-- /stdout --
** stderr ** 
	I0818 12:33:52.230860    5591 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:33:52.231590    5591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:33:52.231599    5591 out.go:358] Setting ErrFile to fd 2...
	I0818 12:33:52.231606    5591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:33:52.232194    5591 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1007/.minikube/bin
	I0818 12:33:52.234022    5591 out.go:352] Setting JSON to false
	I0818 12:33:52.260255    5591 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3803,"bootTime":1724005829,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0818 12:33:52.260365    5591 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:33:52.323880    5591 out.go:177] * [offline-docker-476000] minikube v1.33.1 on Darwin 14.6.1
	I0818 12:33:52.365264    5591 notify.go:220] Checking for updates...
	I0818 12:33:52.392079    5591 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:33:52.430325    5591 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:33:52.450992    5591 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0818 12:33:52.482239    5591 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:33:52.503148    5591 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	I0818 12:33:52.524165    5591 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:33:52.545445    5591 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:33:52.574044    5591 out.go:177] * Using the hyperkit driver based on user configuration
	I0818 12:33:52.616402    5591 start.go:297] selected driver: hyperkit
	I0818 12:33:52.616432    5591 start.go:901] validating driver "hyperkit" against <nil>
	I0818 12:33:52.616454    5591 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:33:52.620700    5591 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:33:52.620813    5591 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19423-1007/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0818 12:33:52.629159    5591 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0818 12:33:52.632921    5591 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:33:52.632955    5591 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0818 12:33:52.632992    5591 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 12:33:52.633201    5591 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:33:52.633263    5591 cni.go:84] Creating CNI manager for ""
	I0818 12:33:52.633278    5591 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 12:33:52.633284    5591 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0818 12:33:52.633349    5591 start.go:340] cluster config:
	{Name:offline-docker-476000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-476000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:33:52.633427    5591 iso.go:125] acquiring lock: {Name:mkfcc70059a4397f76252ba67d02448d3569c468 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:33:52.703328    5591 out.go:177] * Starting "offline-docker-476000" primary control-plane node in "offline-docker-476000" cluster
	I0818 12:33:52.724308    5591 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:33:52.724405    5591 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0818 12:33:52.724458    5591 cache.go:56] Caching tarball of preloaded images
	I0818 12:33:52.724714    5591 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0818 12:33:52.724735    5591 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:33:52.725252    5591 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/offline-docker-476000/config.json ...
	I0818 12:33:52.725300    5591 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/offline-docker-476000/config.json: {Name:mk0c7e75418048ef1afb6976220c766605726d76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:33:52.726204    5591 start.go:360] acquireMachinesLock for offline-docker-476000: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:33:52.726308    5591 start.go:364] duration metric: took 74.544µs to acquireMachinesLock for "offline-docker-476000"
	I0818 12:33:52.726341    5591 start.go:93] Provisioning new machine with config: &{Name:offline-docker-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-476000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:33:52.726436    5591 start.go:125] createHost starting for "" (driver="hyperkit")
	I0818 12:33:52.768051    5591 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0818 12:33:52.768212    5591 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:33:52.768253    5591 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:33:52.777178    5591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53595
	I0818 12:33:52.777551    5591 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:33:52.777958    5591 main.go:141] libmachine: Using API Version  1
	I0818 12:33:52.777978    5591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:33:52.778218    5591 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:33:52.778349    5591 main.go:141] libmachine: (offline-docker-476000) Calling .GetMachineName
	I0818 12:33:52.778444    5591 main.go:141] libmachine: (offline-docker-476000) Calling .DriverName
	I0818 12:33:52.778554    5591 start.go:159] libmachine.API.Create for "offline-docker-476000" (driver="hyperkit")
	I0818 12:33:52.778582    5591 client.go:168] LocalClient.Create starting
	I0818 12:33:52.778616    5591 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem
	I0818 12:33:52.778667    5591 main.go:141] libmachine: Decoding PEM data...
	I0818 12:33:52.778682    5591 main.go:141] libmachine: Parsing certificate...
	I0818 12:33:52.778757    5591 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem
	I0818 12:33:52.778796    5591 main.go:141] libmachine: Decoding PEM data...
	I0818 12:33:52.778809    5591 main.go:141] libmachine: Parsing certificate...
	I0818 12:33:52.778833    5591 main.go:141] libmachine: Running pre-create checks...
	I0818 12:33:52.778839    5591 main.go:141] libmachine: (offline-docker-476000) Calling .PreCreateCheck
	I0818 12:33:52.778916    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:33:52.779127    5591 main.go:141] libmachine: (offline-docker-476000) Calling .GetConfigRaw
	I0818 12:33:52.779611    5591 main.go:141] libmachine: Creating machine...
	I0818 12:33:52.779622    5591 main.go:141] libmachine: (offline-docker-476000) Calling .Create
	I0818 12:33:52.779703    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:33:52.779822    5591 main.go:141] libmachine: (offline-docker-476000) DBG | I0818 12:33:52.779696    5612 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19423-1007/.minikube
	I0818 12:33:52.779884    5591 main.go:141] libmachine: (offline-docker-476000) Downloading /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-1007/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0818 12:33:53.325203    5591 main.go:141] libmachine: (offline-docker-476000) DBG | I0818 12:33:53.325107    5612 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/id_rsa...
	I0818 12:33:53.497826    5591 main.go:141] libmachine: (offline-docker-476000) DBG | I0818 12:33:53.497738    5612 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/offline-docker-476000.rawdisk...
	I0818 12:33:53.497839    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Writing magic tar header
	I0818 12:33:53.497862    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Writing SSH key tar header
	I0818 12:33:53.498154    5591 main.go:141] libmachine: (offline-docker-476000) DBG | I0818 12:33:53.498116    5612 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000 ...
	I0818 12:33:53.959219    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:33:53.959238    5591 main.go:141] libmachine: (offline-docker-476000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/hyperkit.pid
	I0818 12:33:53.959248    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Using UUID dc6a164e-5a06-408e-93b4-9efe128c4efa
	I0818 12:33:54.227884    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Generated MAC 2e:1:33:3c:36:b0
	I0818 12:33:54.227913    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-476000
	I0818 12:33:54.227965    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:33:54 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"dc6a164e-5a06-408e-93b4-9efe128c4efa", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:33:54.228009    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:33:54 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"dc6a164e-5a06-408e-93b4-9efe128c4efa", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:33:54.228081    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:33:54 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "dc6a164e-5a06-408e-93b4-9efe128c4efa", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/offline-docker-476000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-476000"}
	I0818 12:33:54.228155    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:33:54 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U dc6a164e-5a06-408e-93b4-9efe128c4efa -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/offline-docker-476000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-476000"
	I0818 12:33:54.228186    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:33:54 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 12:33:54.231701    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:33:54 DEBUG: hyperkit: Pid is 5638
	I0818 12:33:54.232444    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 0
	I0818 12:33:54.232458    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:33:54.232523    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5638
	I0818 12:33:54.233412    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for 2e:1:33:3c:36:b0 in /var/db/dhcpd_leases ...
	I0818 12:33:54.233496    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:33:54.233509    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:33:54.233527    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:33:54.233538    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:33:54.233550    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:33:54.233561    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:33:54.233575    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:33:54.233589    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:33:54.233601    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:33:54.233614    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:33:54.233637    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:33:54.233657    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:33:54.233665    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:33:54.233670    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:33:54.233746    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:33:54.233778    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:33:54.233794    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:33:54.233808    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:33:54.239280    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:33:54 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 12:33:54.293182    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:33:54 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 12:33:54.312033    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:33:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:33:54.312055    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:33:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:33:54.312066    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:33:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:33:54.312080    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:33:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:33:54.686833    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:33:54 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 12:33:54.686860    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:33:54 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 12:33:54.801850    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:33:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:33:54.801866    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:33:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:33:54.801879    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:33:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:33:54.801885    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:33:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:33:54.802734    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:33:54 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 12:33:54.802744    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:33:54 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 12:33:56.234298    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 1
	I0818 12:33:56.234310    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:33:56.234403    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5638
	I0818 12:33:56.235170    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for 2e:1:33:3c:36:b0 in /var/db/dhcpd_leases ...
	I0818 12:33:56.235227    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:33:56.235238    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:33:56.235257    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:33:56.235291    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:33:56.235305    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:33:56.235335    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:33:56.235345    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:33:56.235353    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:33:56.235375    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:33:56.235385    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:33:56.235394    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:33:56.235407    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:33:56.235419    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:33:56.235427    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:33:56.235436    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:33:56.235461    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:33:56.235476    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:33:56.235492    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:33:58.235926    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 2
	I0818 12:33:58.235940    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:33:58.235984    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5638
	I0818 12:33:58.236810    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for 2e:1:33:3c:36:b0 in /var/db/dhcpd_leases ...
	I0818 12:33:58.236866    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:33:58.236883    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:33:58.236891    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:33:58.236899    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:33:58.236906    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:33:58.236912    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:33:58.236920    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:33:58.236927    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:33:58.236932    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:33:58.236938    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:33:58.236945    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:33:58.236956    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:33:58.236965    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:33:58.236972    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:33:58.236979    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:33:58.236987    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:33:58.236994    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:33:58.237010    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:34:00.174174    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:34:00 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0818 12:34:00.174338    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:34:00 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0818 12:34:00.174347    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:34:00 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0818 12:34:00.194123    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:34:00 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0818 12:34:00.238109    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 3
	I0818 12:34:00.238138    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:34:00.238296    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5638
	I0818 12:34:00.239721    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for 2e:1:33:3c:36:b0 in /var/db/dhcpd_leases ...
	I0818 12:34:00.239931    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:34:00.239962    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:34:00.239989    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:34:00.240004    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:34:00.240040    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:34:00.240073    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:34:00.240087    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:34:00.240097    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:34:00.240107    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:34:00.240117    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:34:00.240131    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:34:00.240141    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:34:00.240151    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:34:00.240180    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:34:00.240192    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:34:00.240207    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:34:00.240243    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:34:00.240261    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:34:02.240499    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 4
	I0818 12:34:02.240512    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:34:02.240615    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5638
	I0818 12:34:02.241365    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for 2e:1:33:3c:36:b0 in /var/db/dhcpd_leases ...
	I0818 12:34:02.241427    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:34:02.241443    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:34:02.241455    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:34:02.241462    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:34:02.241470    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:34:02.241484    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:34:02.241499    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:34:02.241519    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:34:02.241534    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:34:02.241552    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:34:02.241561    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:34:02.241570    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:34:02.241577    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:34:02.241583    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:34:02.241592    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:34:02.241600    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:34:02.241607    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:34:02.241615    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:34:04.243051    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 5
	I0818 12:34:04.243065    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:34:04.243134    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5638
	I0818 12:34:04.243940    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for 2e:1:33:3c:36:b0 in /var/db/dhcpd_leases ...
	I0818 12:34:04.243997    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:34:04.244006    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:34:04.244031    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:34:04.244042    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:34:04.244051    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:34:04.244057    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:34:04.244074    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:34:04.244082    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:34:04.244090    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:34:04.244100    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:34:04.244107    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:34:04.244115    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:34:04.244137    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:34:04.244150    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:34:04.244167    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:34:04.244182    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:34:04.244191    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:34:04.244199    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:34:06.244229    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 6
	I0818 12:34:06.244242    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:34:06.244324    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5638
	I0818 12:34:06.245368    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for 2e:1:33:3c:36:b0 in /var/db/dhcpd_leases ...
	I0818 12:34:06.245420    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:34:06.245431    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:34:06.245444    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:34:06.245451    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:34:06.245462    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:34:06.245468    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:34:06.245476    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:34:06.245483    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:34:06.245490    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:34:06.245498    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:34:06.245505    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:34:06.245512    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:34:06.245520    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:34:06.245527    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:34:06.245534    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:34:06.245543    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:34:06.245549    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:34:06.245565    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:34:08.246199    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 7
	I0818 12:34:08.246217    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:34:08.246269    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5638
	I0818 12:34:08.247092    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for 2e:1:33:3c:36:b0 in /var/db/dhcpd_leases ...
	I0818 12:34:08.247173    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:34:08.247185    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:34:08.247206    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:34:08.247217    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:34:08.247229    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:34:08.247238    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:34:08.247255    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:34:08.247269    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:34:08.247277    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:34:08.247285    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:34:08.247293    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:34:08.247302    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:34:08.247320    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:34:08.247329    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:34:08.247340    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:34:08.247349    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:34:08.247357    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:34:08.247366    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:34:10.247487    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 8
	I0818 12:34:10.247508    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:34:10.247550    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5638
	I0818 12:34:10.248353    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for 2e:1:33:3c:36:b0 in /var/db/dhcpd_leases ...
	I0818 12:34:10.248390    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:34:10.248401    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:34:10.248411    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:34:10.248422    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:34:10.248430    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:34:10.248439    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:34:10.248473    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:34:10.248487    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:34:10.248494    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:34:10.248501    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:34:10.248507    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:34:10.248513    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:34:10.248530    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:34:10.248544    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:34:10.248552    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:34:10.248560    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:34:10.248567    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:34:10.248575    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
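
The two attempts above show the pattern that repeats for the rest of this trace: roughly every two seconds the hyperkit driver re-reads /var/db/dhcpd_leases, logs the 17 known leases, and finds no entry for the VM's MAC 2e:1:33:3c:36:b0. As a rough illustration only, a minimal Go sketch of that per-attempt lookup might look like the following; DHCPEntry and findIPByMAC are hypothetical names mirroring the struct rendering in the log, not minikube's actual types or API.

```go
package main

import "fmt"

// DHCPEntry mirrors the fields printed in the log's "dhcp entry" lines.
// It is an assumption for illustration, not minikube's real type.
type DHCPEntry struct {
	Name      string
	IPAddress string
	HWAddress string
}

// findIPByMAC returns the IP leased to mac, or "" when no lease matches.
// The empty result is what keeps the driver retrying in the log above.
func findIPByMAC(entries []DHCPEntry, mac string) string {
	for _, e := range entries {
		if e.HWAddress == mac {
			return e.IPAddress
		}
	}
	return ""
}

func main() {
	// Two of the 17 leases from the trace, abbreviated.
	entries := []DHCPEntry{
		{Name: "minikube", IPAddress: "192.169.0.18", HWAddress: "3a:2c:db:9e:9d:78"},
		{Name: "minikube", IPAddress: "192.169.0.2", HWAddress: "42:f:73:12:11:a3"},
	}
	if ip := findIPByMAC(entries, "2e:1:33:3c:36:b0"); ip == "" {
		fmt.Println("MAC not found in leases yet; retry")
	}
}
```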
	[... attempts 9 through 21 (12:34:12 – 12:34:36) omitted: each repeats the identical scan of /var/db/dhcpd_leases, finding the same 17 entries and no lease for 2e:1:33:3c:36:b0 ...]
	I0818 12:34:38.276145    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 22
	I0818 12:34:38.276159    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:34:38.276266    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5638
	I0818 12:34:38.277054    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for 2e:1:33:3c:36:b0 in /var/db/dhcpd_leases ...
	I0818 12:34:38.277091    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:34:38.277099    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:34:38.277116    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:34:38.277124    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:34:38.277145    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:34:38.277152    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:34:38.277171    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:34:38.277181    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:34:38.277193    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:34:38.277209    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:34:38.277220    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:34:38.277233    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:34:38.277244    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:34:38.277260    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:34:38.277269    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:34:38.277283    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:34:38.277293    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:34:38.277306    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:34:40.277200    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 23
	I0818 12:34:40.277216    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:34:40.277288    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5638
	I0818 12:34:40.278066    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for 2e:1:33:3c:36:b0 in /var/db/dhcpd_leases ...
	I0818 12:34:40.278110    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:34:40.278119    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:34:40.278130    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:34:40.278137    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:34:40.278143    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:34:40.278152    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:34:40.278173    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:34:40.278187    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:34:40.278195    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:34:40.278204    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:34:40.278217    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:34:40.278229    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:34:40.278239    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:34:40.278248    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:34:40.278255    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:34:40.278262    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:34:40.278268    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:34:40.278277    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:34:42.278470    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 24
	I0818 12:34:42.278484    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:34:42.278598    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5638
	I0818 12:34:42.279376    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for 2e:1:33:3c:36:b0 in /var/db/dhcpd_leases ...
	I0818 12:34:42.279415    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:34:42.279426    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:34:42.279436    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:34:42.279443    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:34:42.279450    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:34:42.279456    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:34:42.279463    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:34:42.279469    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:34:42.279477    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:34:42.279486    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:34:42.279501    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:34:42.279515    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:34:42.279528    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:34:42.279536    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:34:42.279543    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:34:42.279552    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:34:42.279568    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:34:42.279576    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:34:44.281561    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 25
	I0818 12:34:44.281578    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:34:44.281638    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5638
	I0818 12:34:44.282402    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for 2e:1:33:3c:36:b0 in /var/db/dhcpd_leases ...
	I0818 12:34:44.282453    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:34:44.282462    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:34:44.282470    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:34:44.282494    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:34:44.282511    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:34:44.282520    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:34:44.282527    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:34:44.282540    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:34:44.282550    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:34:44.282558    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:34:44.282576    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:34:44.282586    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:34:44.282599    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:34:44.282607    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:34:44.282614    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:34:44.282623    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:34:44.282637    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:34:44.282645    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:34:46.284578    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 26
	I0818 12:34:46.284604    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:34:46.284641    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5638
	I0818 12:34:46.285716    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for 2e:1:33:3c:36:b0 in /var/db/dhcpd_leases ...
	I0818 12:34:46.285756    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:34:46.285763    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:34:46.285773    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:34:46.285780    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:34:46.285788    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:34:46.285794    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:34:46.285800    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:34:46.285814    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:34:46.285828    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:34:46.285840    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:34:46.285849    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:34:46.285857    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:34:46.285864    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:34:46.285872    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:34:46.285878    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:34:46.285885    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:34:46.285892    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:34:46.285899    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:34:48.286164    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 27
	I0818 12:34:48.286180    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:34:48.286236    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5638
	I0818 12:34:48.287050    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for 2e:1:33:3c:36:b0 in /var/db/dhcpd_leases ...
	I0818 12:34:48.287097    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:34:48.287109    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:34:48.287117    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:34:48.287123    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:34:48.287130    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:34:48.287137    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:34:48.287144    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:34:48.287160    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:34:48.287171    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:34:48.287194    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:34:48.287207    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:34:48.287217    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:34:48.287224    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:34:48.287230    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:34:48.287239    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:34:48.287248    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:34:48.287257    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:34:48.287270    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:34:50.289258    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 28
	I0818 12:34:50.289273    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:34:50.289332    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5638
	I0818 12:34:50.290152    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for 2e:1:33:3c:36:b0 in /var/db/dhcpd_leases ...
	I0818 12:34:50.290206    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:34:50.290218    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:34:50.290237    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:34:50.290248    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:34:50.290270    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:34:50.290279    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:34:50.290286    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:34:50.290294    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:34:50.290305    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:34:50.290314    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:34:50.290321    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:34:50.290329    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:34:50.290336    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:34:50.290345    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:34:50.290352    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:34:50.290361    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:34:50.290370    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:34:50.290379    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:34:52.291602    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 29
	I0818 12:34:52.291617    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:34:52.291700    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5638
	I0818 12:34:52.292505    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for 2e:1:33:3c:36:b0 in /var/db/dhcpd_leases ...
	I0818 12:34:52.292552    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:34:52.292564    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:34:52.292572    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:34:52.292586    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:34:52.292593    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:34:52.292599    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:34:52.292607    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:34:52.292621    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:34:52.292629    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:34:52.292636    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:34:52.292642    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:34:52.292656    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:34:52.292667    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:34:52.292691    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:34:52.292702    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:34:52.292710    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:34:52.292718    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:34:52.292726    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
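
Attempts 21 through 29 above all repeat one check: every ~2 seconds the hyperkit driver re-reads macOS's /var/db/dhcpd_leases and looks for an entry whose hardware address matches the MAC generated for this VM (2e:1:33:3c:36:b0); the 17 entries it keeps finding belong to earlier minikube VMs, never to this one. A minimal sketch of that scan in Go, assuming the usual dhcpd_leases layout (name/ip_address/hw_address/lease fields per entry); this is illustrative, not minikube's actual code:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // findIPForMAC scans the dhcpd_leases file for an entry whose
    // hw_address matches mac and returns its ip_address, or "" if absent.
    // The file stores octets without leading zeros ("2e:1:33:..."),
    // which is why the log searches for that exact form.
    func findIPForMAC(leasesPath, mac string) (string, error) {
        f, err := os.Open(leasesPath)
        if err != nil {
            return "", err
        }
        defer f.Close()

        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address="):
                // entries look like "hw_address=1,2e:1:33:3c:36:b0"
                if strings.HasSuffix(line, ","+mac) {
                    return ip, nil
                }
            }
        }
        return "", sc.Err()
    }

    func main() {
        ip, err := findIPForMAC("/var/db/dhcpd_leases", "2e:1:33:3c:36:b0")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if ip == "" {
            fmt.Println("no lease yet; the driver sleeps ~2s and retries")
            return
        }
        fmt.Println("found IP:", ip)
    }
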
	I0818 12:34:54.293657    5591 client.go:171] duration metric: took 1m1.51655153s to LocalClient.Create
	I0818 12:34:56.295723    5591 start.go:128] duration metric: took 1m3.570798992s to createHost
	I0818 12:34:56.295736    5591 start.go:83] releasing machines lock for "offline-docker-476000", held for 1m3.570955387s
	W0818 12:34:56.295753    5591 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 2e:1:33:3c:36:b0
	I0818 12:34:56.296191    5591 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:34:56.296227    5591 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:34:56.305619    5591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53631
	I0818 12:34:56.305973    5591 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:34:56.306407    5591 main.go:141] libmachine: Using API Version  1
	I0818 12:34:56.306416    5591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:34:56.306706    5591 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:34:56.307066    5591 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:34:56.307121    5591 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:34:56.315974    5591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53633
	I0818 12:34:56.316455    5591 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:34:56.316938    5591 main.go:141] libmachine: Using API Version  1
	I0818 12:34:56.316952    5591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:34:56.317240    5591 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:34:56.317373    5591 main.go:141] libmachine: (offline-docker-476000) Calling .GetState
	I0818 12:34:56.317462    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:34:56.317538    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5638
	I0818 12:34:56.318516    5591 main.go:141] libmachine: (offline-docker-476000) Calling .DriverName
	I0818 12:34:56.339092    5591 out.go:177] * Deleting "offline-docker-476000" in hyperkit ...
	I0818 12:34:56.360408    5591 main.go:141] libmachine: (offline-docker-476000) Calling .Remove
	I0818 12:34:56.360551    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:34:56.360562    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:34:56.360616    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5638
	I0818 12:34:56.361540    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:34:56.361598    5591 main.go:141] libmachine: (offline-docker-476000) DBG | waiting for graceful shutdown
	I0818 12:34:57.363684    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:34:57.363747    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5638
	I0818 12:34:57.364657    5591 main.go:141] libmachine: (offline-docker-476000) DBG | waiting for graceful shutdown
	I0818 12:34:58.365036    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:34:58.365135    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5638
	I0818 12:34:58.366782    5591 main.go:141] libmachine: (offline-docker-476000) DBG | waiting for graceful shutdown
	I0818 12:34:59.368842    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:34:59.368940    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5638
	I0818 12:34:59.369531    5591 main.go:141] libmachine: (offline-docker-476000) DBG | waiting for graceful shutdown
	I0818 12:35:00.370093    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:00.370158    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5638
	I0818 12:35:00.370731    5591 main.go:141] libmachine: (offline-docker-476000) DBG | waiting for graceful shutdown
	I0818 12:35:01.370837    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:01.370928    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5638
	I0818 12:35:01.372003    5591 main.go:141] libmachine: (offline-docker-476000) DBG | sending sigkill
	I0818 12:35:01.372014    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W0818 12:35:01.386015    5591 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 2e:1:33:3c:36:b0
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 2e:1:33:3c:36:b0
	I0818 12:35:01.386035    5591 start.go:729] Will try again in 5 seconds ...
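
Having given up on the first VM, start.go deletes it and schedules exactly one more attempt after a fixed 5-second pause. The shape of that fail-once-then-retry flow as a toy sketch (the real logic also re-acquires the machines lock and reuses the saved config):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startHostWithRetry runs start once and, on failure, waits five
    // seconds and tries exactly one more time, mirroring the log above.
    func startHostWithRetry(start func() error) error {
        if err := start(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second)
            return start()
        }
        return nil
    }

    func main() {
        attempts := 0
        err := startHostWithRetry(func() error {
            attempts++
            if attempts == 1 {
                return errors.New("IP address never found in dhcp leases file")
            }
            return nil
        })
        fmt.Printf("attempts=%d err=%v\n", attempts, err)
    }
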
	I0818 12:35:01.394582    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:35:01 WARN : hyperkit: failed to read stderr: EOF
	I0818 12:35:01.394599    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:35:01 WARN : hyperkit: failed to read stdout: EOF
	I0818 12:35:06.387966    5591 start.go:360] acquireMachinesLock for offline-docker-476000: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:35:59.173078    5591 start.go:364] duration metric: took 52.786360222s to acquireMachinesLock for "offline-docker-476000"
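
The 52.8s logged here was spent waiting on the per-host machines lock; the spec printed at acquireMachinesLock time (Name/Clock/Delay:500ms/Timeout:13m0s/Cancel) is the shape of a juju/mutex-style poll lock. A sketch of the same idea built on exclusive file creation, with an assumed lock-file path; minikube's real lock is implemented differently:

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquireLock polls every delay until timeout, taking the lock by
    // creating path exclusively; the winner removes path to release it.
    func acquireLock(path string, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return nil
            }
            if !errors.Is(err, os.ErrExist) {
                return err
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        // Same Delay/Timeout as the spec logged by acquireMachinesLock.
        if err := acquireLock("/tmp/offline-docker-476000.lock",
            500*time.Millisecond, 13*time.Minute); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("lock acquired")
        os.Remove("/tmp/offline-docker-476000.lock")
    }
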
	I0818 12:35:59.173104    5591 start.go:93] Provisioning new machine with config: &{Name:offline-docker-476000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-476000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:35:59.173157    5591 start.go:125] createHost starting for "" (driver="hyperkit")
	I0818 12:35:59.194337    5591 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0818 12:35:59.194409    5591 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:35:59.194434    5591 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:35:59.202974    5591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53641
	I0818 12:35:59.203300    5591 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:35:59.203725    5591 main.go:141] libmachine: Using API Version  1
	I0818 12:35:59.203748    5591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:35:59.204005    5591 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:35:59.204111    5591 main.go:141] libmachine: (offline-docker-476000) Calling .GetMachineName
	I0818 12:35:59.204211    5591 main.go:141] libmachine: (offline-docker-476000) Calling .DriverName
	I0818 12:35:59.204334    5591 start.go:159] libmachine.API.Create for "offline-docker-476000" (driver="hyperkit")
	I0818 12:35:59.204352    5591 client.go:168] LocalClient.Create starting
	I0818 12:35:59.204379    5591 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem
	I0818 12:35:59.204432    5591 main.go:141] libmachine: Decoding PEM data...
	I0818 12:35:59.204452    5591 main.go:141] libmachine: Parsing certificate...
	I0818 12:35:59.204494    5591 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem
	I0818 12:35:59.204533    5591 main.go:141] libmachine: Decoding PEM data...
	I0818 12:35:59.204544    5591 main.go:141] libmachine: Parsing certificate...
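
The Reading/Decoding/Parsing triple above is LocalClient.Create loading the CA and client certificates before provisioning; the steps map directly onto encoding/pem and crypto/x509 in the standard library. A self-contained sketch with a placeholder path:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Placeholder path; the log reads .minikube/certs/ca.pem and cert.pem.
        data, err := os.ReadFile("certs/ca.pem")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data) // "Decoding PEM data..."
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes) // "Parsing certificate..."
        if err != nil {
            panic(err)
        }
        fmt.Println("subject:", cert.Subject)
        fmt.Println("expires:", cert.NotAfter)
    }
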
	I0818 12:35:59.204556    5591 main.go:141] libmachine: Running pre-create checks...
	I0818 12:35:59.204562    5591 main.go:141] libmachine: (offline-docker-476000) Calling .PreCreateCheck
	I0818 12:35:59.204637    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:59.204669    5591 main.go:141] libmachine: (offline-docker-476000) Calling .GetConfigRaw
	I0818 12:35:59.236234    5591 main.go:141] libmachine: Creating machine...
	I0818 12:35:59.236256    5591 main.go:141] libmachine: (offline-docker-476000) Calling .Create
	I0818 12:35:59.236344    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:59.236503    5591 main.go:141] libmachine: (offline-docker-476000) DBG | I0818 12:35:59.236364    5798 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19423-1007/.minikube
	I0818 12:35:59.236590    5591 main.go:141] libmachine: (offline-docker-476000) Downloading /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-1007/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0818 12:35:59.443148    5591 main.go:141] libmachine: (offline-docker-476000) DBG | I0818 12:35:59.443070    5798 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/id_rsa...
	I0818 12:35:59.484876    5591 main.go:141] libmachine: (offline-docker-476000) DBG | I0818 12:35:59.484811    5798 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/offline-docker-476000.rawdisk...
	I0818 12:35:59.484885    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Writing magic tar header
	I0818 12:35:59.484897    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Writing SSH key tar header
	I0818 12:35:59.485295    5591 main.go:141] libmachine: (offline-docker-476000) DBG | I0818 12:35:59.485256    5798 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000 ...
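
This block prepares the new machine's artifacts: an id_rsa keypair, a raw disk image, and a "magic" tar header on the disk carrying the SSH key so the guest can pick it up on first boot. A sketch of just the keypair step, using the standard library plus golang.org/x/crypto/ssh (file names are placeholders; minikube's helper differs in detail):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // Private half as PKCS#1 PEM, written 0600 like a typical id_rsa.
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
            panic(err)
        }
        // Public half in authorized_keys format for the guest.
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            panic(err)
        }
        if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
            panic(err)
        }
    }
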
	I0818 12:35:59.859912    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:59.859932    5591 main.go:141] libmachine: (offline-docker-476000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/hyperkit.pid
	I0818 12:35:59.859960    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Using UUID c23db479-aa85-4f91-9a43-a65666bade54
	I0818 12:35:59.886066    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Generated MAC b6:78:b7:43:37:89
	I0818 12:35:59.886087    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-476000
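
A fresh UUID means a fresh MAC: with VMNet networking the guest MAC is derived on the host from the -U UUID, so this second VM will be polled for b6:78:b7:43:37:89 instead of the address that never appeared. For contrast, producing a random locally administered unicast MAC by hand looks like this (illustrative only; the driver asks vmnet rather than rolling its own):

    package main

    import (
        "crypto/rand"
        "fmt"
    )

    func main() {
        b := make([]byte, 6)
        if _, err := rand.Read(b); err != nil {
            panic(err)
        }
        b[0] = (b[0] | 0x02) &^ 0x01 // locally administered, unicast
        // dhcpd_leases would later print these octets without leading zeros.
        fmt.Printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
            b[0], b[1], b[2], b[3], b[4], b[5])
    }
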
	I0818 12:35:59.886124    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:35:59 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"c23db479-aa85-4f91-9a43-a65666bade54", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000118540)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:35:59.886152    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:35:59 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"c23db479-aa85-4f91-9a43-a65666bade54", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000118540)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:35:59.886198    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:35:59 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "c23db479-aa85-4f91-9a43-a65666bade54", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/offline-docker-476000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-476000"}
	I0818 12:35:59.886244    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:35:59 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U c23db479-aa85-4f91-9a43-a65666bade54 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/offline-docker-476000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-476000"
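
The Arguments and CmdLine lines show the full argv handed to hyperkit: 2 vCPUs and 2048M of RAM, a virtio-net NIC (MAC keyed to the -U UUID), virtio-blk for the raw disk, an ahci-cd for the boot ISO, and a kexec-style direct boot of the bzimage/initrd pair. A trimmed sketch of spawning it with os/exec (paths shortened to placeholders; the real driver also manages the pid file and pipes output to its logger):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        args := []string{
            "-A", "-u",
            "-F", "hyperkit.pid", // pid file
            "-c", "2", // vCPUs
            "-m", "2048M", // memory
            "-s", "0:0,hostbridge", // PCI host bridge
            "-s", "31,lpc", // LPC bridge backing the serial console
            "-s", "1:0,virtio-net", // NIC; MAC comes from the -U UUID via vmnet
            "-U", "c23db479-aa85-4f91-9a43-a65666bade54",
            "-s", "2:0,virtio-blk,offline-docker-476000.rawdisk",
            "-s", "3,ahci-cd,boot2docker.iso",
            "-s", "4,virtio-rnd",
            "-f", "kexec,bzimage,initrd,earlyprintk=serial console=ttyS0",
        }
        cmd := exec.Command("/usr/local/bin/hyperkit", args...)
        cmd.Stdout = os.Stdout // the driver instead redirects these to its logger
        cmd.Stderr = os.Stderr
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        fmt.Println("Pid is", cmd.Process.Pid) // cf. "DEBUG: hyperkit: Pid is 5799"
    }
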
	I0818 12:35:59.886252    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:35:59 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 12:35:59.889280    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:35:59 DEBUG: hyperkit: Pid is 5799
	I0818 12:35:59.890352    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 0
	I0818 12:35:59.890364    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:59.890412    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5799
	I0818 12:35:59.891399    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for b6:78:b7:43:37:89 in /var/db/dhcpd_leases ...
	I0818 12:35:59.891423    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:35:59.891438    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:35:59.891457    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:35:59.891478    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:35:59.891489    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:35:59.891504    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:35:59.891533    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:35:59.891549    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:35:59.891558    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:35:59.891567    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:35:59.891577    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:35:59.891645    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:35:59.891681    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:35:59.891692    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:35:59.891703    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:35:59.891734    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:35:59.891761    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:35:59.891782    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:35:59.897217    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:35:59 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 12:35:59.905411    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:35:59 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/offline-docker-476000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 12:35:59.906396    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:35:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:35:59.906414    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:35:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:35:59.906446    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:35:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:35:59.906461    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:35:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:36:00.283136    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:36:00 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 12:36:00.283147    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:36:00 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 12:36:00.397703    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:36:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:36:00.397718    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:36:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:36:00.397729    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:36:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:36:00.397738    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:36:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:36:00.398611    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:36:00 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 12:36:00.398625    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:36:00 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 12:36:01.892420    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 1
	I0818 12:36:01.892437    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:01.892532    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5799
	I0818 12:36:01.893323    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for b6:78:b7:43:37:89 in /var/db/dhcpd_leases ...
	I0818 12:36:01.893379    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:36:01.893390    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:36:01.893407    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:36:01.893416    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:36:01.893427    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:36:01.893439    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:36:01.893450    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:36:01.893458    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:36:01.893466    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:36:01.893473    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:36:01.893486    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:36:01.893492    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:36:01.893500    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:36:01.893507    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:36:01.893514    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:36:01.893520    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:36:01.893527    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:36:01.893537    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:36:03.895474    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 2
	I0818 12:36:03.895488    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:03.895566    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5799
	I0818 12:36:03.896374    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for b6:78:b7:43:37:89 in /var/db/dhcpd_leases ...
	I0818 12:36:03.896421    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:36:03.896436    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:36:03.896449    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:36:03.896467    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:36:03.896477    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:36:03.896485    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:36:03.896493    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:36:03.896499    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:36:03.896517    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:36:03.896530    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:36:03.896540    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:36:03.896549    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:36:03.896556    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:36:03.896568    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:36:03.896575    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:36:03.896588    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:36:03.896599    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:36:03.896609    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:36:05.790351    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:36:05 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0818 12:36:05.790513    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:36:05 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0818 12:36:05.790525    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:36:05 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0818 12:36:05.811551    5591 main.go:141] libmachine: (offline-docker-476000) DBG | 2024/08/18 12:36:05 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0818 12:36:05.897730    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 3
	I0818 12:36:05.897755    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:05.897975    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5799
	I0818 12:36:05.899413    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for b6:78:b7:43:37:89 in /var/db/dhcpd_leases ...
	I0818 12:36:05.899509    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:36:05.899525    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:36:05.899541    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:36:05.899558    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:36:05.899581    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:36:05.899599    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:36:05.899634    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:36:05.899663    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:36:05.899683    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:36:05.899697    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:36:05.899734    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:36:05.899747    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:36:05.899757    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:36:05.899768    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:36:05.899778    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:36:05.899789    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:36:05.899798    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:36:05.899807    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:36:07.900575    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 4
	I0818 12:36:07.900589    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:07.900687    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5799
	I0818 12:36:07.901445    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for b6:78:b7:43:37:89 in /var/db/dhcpd_leases ...
	I0818 12:36:07.901498    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:36:07.901509    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:36:07.901523    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:36:07.901531    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:36:07.901542    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:36:07.901548    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:36:07.901557    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:36:07.901565    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:36:07.901573    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:36:07.901581    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:36:07.901588    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:36:07.901594    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:36:07.901609    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:36:07.901623    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:36:07.901644    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:36:07.901654    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:36:07.901673    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:36:07.901685    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:36:09.903700    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 5
	I0818 12:36:09.903715    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:09.903749    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5799
	I0818 12:36:09.904728    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for b6:78:b7:43:37:89 in /var/db/dhcpd_leases ...
	I0818 12:36:09.904758    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:36:09.904766    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:36:09.904774    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:36:09.904782    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:36:09.904793    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:36:09.904799    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:36:09.904806    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:36:09.904812    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:36:09.904818    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:36:09.904827    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:36:09.904835    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:36:09.904844    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:36:09.904852    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:36:09.904860    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:36:09.904866    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:36:09.904874    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:36:09.904881    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:36:09.904888    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:36:11.905452    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 6
	I0818 12:36:11.905464    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:11.905537    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5799
	I0818 12:36:11.906357    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for b6:78:b7:43:37:89 in /var/db/dhcpd_leases ...
	I0818 12:36:11.906379    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:36:11.906393    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:36:11.906402    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:36:11.906411    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:36:11.906420    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:36:11.906426    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:36:11.906433    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:36:11.906440    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:36:11.906446    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:36:11.906453    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:36:11.906461    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:36:11.906468    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:36:11.906476    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:36:11.906484    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:36:11.906493    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:36:11.906508    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:36:11.906527    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:36:11.906537    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:36:13.907475    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 7
	I0818 12:36:13.907490    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:13.907559    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5799
	I0818 12:36:13.908561    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for b6:78:b7:43:37:89 in /var/db/dhcpd_leases ...
	I0818 12:36:13.908593    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:36:13.908601    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:36:13.908612    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:36:13.908619    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:36:13.908626    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:36:13.908634    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:36:13.908642    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:36:13.908648    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:36:13.908656    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:36:13.908664    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:36:13.908681    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:36:13.908689    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:36:13.908697    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:36:13.908705    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:36:13.908713    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:36:13.908735    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:36:13.908747    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:36:13.908755    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:36:15.909081    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 8
	I0818 12:36:15.909096    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:15.909152    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5799
	I0818 12:36:15.910131    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for b6:78:b7:43:37:89 in /var/db/dhcpd_leases ...
	I0818 12:36:15.910166    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:36:15.910176    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:36:15.910193    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:36:15.910205    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:36:15.910220    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:36:15.910227    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:36:15.910235    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:36:15.910243    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:36:15.910252    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:36:15.910260    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:36:15.910267    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:36:15.910273    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:36:15.910278    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:36:15.910289    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:36:15.910301    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:36:15.910318    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:36:15.910332    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:36:15.910341    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:36:17.911484    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 9
	I0818 12:36:17.911499    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:17.911606    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5799
	I0818 12:36:17.912432    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for b6:78:b7:43:37:89 in /var/db/dhcpd_leases ...
	I0818 12:36:17.912460    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:36:17.912467    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:36:17.912478    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:36:17.912487    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:36:17.912495    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:36:17.912504    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:36:17.912511    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:36:17.912517    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:36:17.912525    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:36:17.912533    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:36:17.912541    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:36:17.912548    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:36:17.912555    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:36:17.912563    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:36:17.912581    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:36:17.912593    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:36:17.912601    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:36:17.912609    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:36:19.913290    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 10
	I0818 12:36:19.913315    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:19.913354    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5799
	I0818 12:36:19.914454    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for b6:78:b7:43:37:89 in /var/db/dhcpd_leases ...
	I0818 12:36:19.914486    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:36:19.914500    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:36:19.914511    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:36:19.914518    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:36:19.914525    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:36:19.914533    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:36:19.914549    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:36:19.914568    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:36:19.914582    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:36:19.914594    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:36:19.914647    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:36:19.914657    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:36:19.914665    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:36:19.914675    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:36:19.914683    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:36:19.914691    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:36:19.914703    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:36:19.914714    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:36:21.915125    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 11
	I0818 12:36:21.915141    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:21.915218    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5799
	I0818 12:36:21.916041    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for b6:78:b7:43:37:89 in /var/db/dhcpd_leases ...
	I0818 12:36:21.916089    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:36:21.916101    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:36:21.916108    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:36:21.916115    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:36:21.916148    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:36:21.916161    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:36:21.916173    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:36:21.916182    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:36:21.916189    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:36:21.916195    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:36:21.916201    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:36:21.916208    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:36:21.916214    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:36:21.916222    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:36:21.916229    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:36:21.916237    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:36:21.916244    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:36:21.916250    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:36:23.916688    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 12
	I0818 12:36:23.916701    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:23.916784    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5799
	I0818 12:36:23.917637    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for b6:78:b7:43:37:89 in /var/db/dhcpd_leases ...
	I0818 12:36:23.917688    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:36:23.917699    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:36:23.917707    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:36:23.917716    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:36:23.917723    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:36:23.917729    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:36:23.917741    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:36:23.917747    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:36:23.917755    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:36:23.917762    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:36:23.917771    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:36:23.917782    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:36:23.917790    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:36:23.917798    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:36:23.917805    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:36:23.917813    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:36:23.917820    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:36:23.917827    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:36:25.918900    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 13
	I0818 12:36:25.918912    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:25.918986    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5799
	I0818 12:36:25.919773    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for b6:78:b7:43:37:89 in /var/db/dhcpd_leases ...
	I0818 12:36:25.919819    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:36:25.919829    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:36:25.919839    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:36:25.919849    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:36:25.919857    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:36:25.919868    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:36:25.919879    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:36:25.919889    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:36:25.919902    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:36:25.919910    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:36:25.919916    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:36:25.919925    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:36:25.919932    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:36:25.919941    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:36:25.919948    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:36:25.919956    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:36:25.919963    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:36:25.919970    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:36:27.921954    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 14
	I0818 12:36:27.921969    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:27.922019    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5799
	I0818 12:36:27.922791    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for b6:78:b7:43:37:89 in /var/db/dhcpd_leases ...
	I0818 12:36:27.922836    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:36:27.922846    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:36:27.922854    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:36:27.922868    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:36:27.922884    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:36:27.922897    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:36:27.922906    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:36:27.922912    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:36:27.922919    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:36:27.922927    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:36:27.922934    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:36:27.922942    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:36:27.922952    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:36:27.922958    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:36:27.922965    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:36:27.922979    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:36:27.922991    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:36:27.923001    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:36:29.923106    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 15
	I0818 12:36:29.923121    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:29.923161    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5799
	I0818 12:36:29.924086    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for b6:78:b7:43:37:89 in /var/db/dhcpd_leases ...
	I0818 12:36:29.924143    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:36:29.924164    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:36:29.924174    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:36:29.924184    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:36:29.924197    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:36:29.924215    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:36:29.924227    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:36:29.924234    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:36:29.924243    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:36:29.924256    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:36:29.924267    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:36:29.924275    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:36:29.924283    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:36:29.924297    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:36:29.924312    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:36:29.924321    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:36:29.924332    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:36:29.924342    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:36:31.925483    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 16
	I0818 12:36:31.925498    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:31.925560    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5799
	I0818 12:36:31.926356    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for b6:78:b7:43:37:89 in /var/db/dhcpd_leases ...
	I0818 12:36:31.926391    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:36:31.926401    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:36:31.926412    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:36:31.926419    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:36:31.926427    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:36:31.926433    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:36:31.926449    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:36:31.926459    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:36:31.926478    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:36:31.926487    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:36:31.926495    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:36:31.926515    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:36:31.926529    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:36:31.926537    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:36:31.926544    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:36:31.926552    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:36:31.926564    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:36:31.926571    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:36:33.926712    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 17
	I0818 12:36:33.926729    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:33.926785    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5799
	I0818 12:36:33.927744    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for b6:78:b7:43:37:89 in /var/db/dhcpd_leases ...
	I0818 12:36:33.927797    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:36:33.927807    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:36:33.927820    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:36:33.927831    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:36:33.927839    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:36:33.927845    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:36:33.927852    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:36:33.927873    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:36:33.927892    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:36:33.927901    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:36:33.927908    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:36:33.927916    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:36:33.927924    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:36:33.927932    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:36:33.927946    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:36:33.927955    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:36:33.927963    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:36:33.927971    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:36:35.929051    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 18
	I0818 12:36:35.929064    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:35.929101    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5799
	I0818 12:36:35.929881    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for b6:78:b7:43:37:89 in /var/db/dhcpd_leases ...
	I0818 12:36:35.929922    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:36:35.929932    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:36:35.929943    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:36:35.929949    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:36:35.929956    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:36:35.929965    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:36:35.929973    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:36:35.929979    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:36:35.929993    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:36:35.930006    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:36:35.930014    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:36:35.930022    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:36:35.930049    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:36:35.930061    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:36:35.930069    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:36:35.930077    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:36:35.930090    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:36:35.930103    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:36:37.932047    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 19
	I0818 12:36:37.932061    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:37.932118    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5799
	I0818 12:36:37.933138    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for b6:78:b7:43:37:89 in /var/db/dhcpd_leases ...
	I0818 12:36:37.933151    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:36:37.933174    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:36:37.933185    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:36:37.933192    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:36:37.933200    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:36:37.933217    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:36:37.933224    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:36:37.933230    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:36:37.933239    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:36:37.933245    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:36:37.933254    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:36:37.933261    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:36:37.933270    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:36:37.933287    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:36:37.933299    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:36:37.933315    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:36:37.933323    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:36:37.933333    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:36:39.933662    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 20
	I0818 12:36:39.933674    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:39.933741    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5799
	I0818 12:36:39.934511    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for b6:78:b7:43:37:89 in /var/db/dhcpd_leases ...
	I0818 12:36:39.934565    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:36:39.934579    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:36:39.934588    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:36:39.934597    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:36:39.934608    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:36:39.934621    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:36:39.934630    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:36:39.934643    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:36:39.934654    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:36:39.934662    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:36:39.934670    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:36:39.934678    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:36:39.934683    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:36:39.934690    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:36:39.934697    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:36:39.934705    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:36:39.934718    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:36:39.934731    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:36:41.934832    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 21
	I0818 12:36:41.934847    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:41.934913    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5799
	I0818 12:36:41.935701    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for b6:78:b7:43:37:89 in /var/db/dhcpd_leases ...
	I0818 12:36:41.935736    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:36:41.935756    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:36:41.935772    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:36:41.935783    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:36:41.935790    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:36:41.935796    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:36:41.935806    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:36:41.935814    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:36:41.935839    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:36:41.935853    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:36:41.935868    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:36:41.935874    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:36:41.935882    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:36:41.935889    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:36:41.935896    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:36:41.935904    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:36:41.935912    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:36:41.935921    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:36:43.936505    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 22
	I0818 12:36:43.936521    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:43.936583    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5799
	I0818 12:36:43.937703    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for b6:78:b7:43:37:89 in /var/db/dhcpd_leases ...
	I0818 12:36:43.937744    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:36:43.937755    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:36:43.937772    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:36:43.937782    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:36:43.937793    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:36:43.937800    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:36:43.937824    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:36:43.937837    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:36:43.937845    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:36:43.937853    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:36:43.937860    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:36:43.937868    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:36:43.937887    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:36:43.937899    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:36:43.937907    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:36:43.937924    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:36:43.937932    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:36:43.937940    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:36:45.938584    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 23
	I0818 12:36:45.938599    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:45.938670    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5799
	I0818 12:36:45.939433    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for b6:78:b7:43:37:89 in /var/db/dhcpd_leases ...
	I0818 12:36:45.939496    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:36:45.939512    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:36:45.939531    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:36:45.939550    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:36:45.939560    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:36:45.939569    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:36:45.939578    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:36:45.939588    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:36:45.939595    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:36:45.939603    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:36:45.939610    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:36:45.939618    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:36:45.939625    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:36:45.939631    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:36:45.939638    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:36:45.939646    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:36:45.939653    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:36:45.939662    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:36:47.939836    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 24
	I0818 12:36:47.939851    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:47.939950    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5799
	I0818 12:36:47.940681    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for b6:78:b7:43:37:89 in /var/db/dhcpd_leases ...
	I0818 12:36:47.940746    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:36:47.940754    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:36:47.940761    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:36:47.940768    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:36:47.940775    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:36:47.940782    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:36:47.940790    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:36:47.940807    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:36:47.940819    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:36:47.940827    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:36:47.940837    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:36:47.940852    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:36:47.940865    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:36:47.940884    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:36:47.940896    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:36:47.940904    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:36:47.940912    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:36:47.940921    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:36:49.942832    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 25
	I0818 12:36:49.942848    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:49.942954    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5799
	I0818 12:36:49.943706    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for b6:78:b7:43:37:89 in /var/db/dhcpd_leases ...
	I0818 12:36:49.943775    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:36:49.943792    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:36:49.943804    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:36:49.943810    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:36:49.943825    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:36:49.943831    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:36:49.943837    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:36:49.943843    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:36:49.943859    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:36:49.943869    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:36:49.943878    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:36:49.943885    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:36:49.943897    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:36:49.943910    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:36:49.943927    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:36:49.943936    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:36:49.943944    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:36:49.943950    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:36:51.944857    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 26
	I0818 12:36:51.944870    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:51.944911    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5799
	I0818 12:36:51.945702    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for b6:78:b7:43:37:89 in /var/db/dhcpd_leases ...
	I0818 12:36:51.945755    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:36:51.945766    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:36:51.945778    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:36:51.945794    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:36:51.945805    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:36:51.945811    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:36:51.945818    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:36:51.945826    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:36:51.945833    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:36:51.945839    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:36:51.945845    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:36:51.945851    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:36:51.945860    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:36:51.945868    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:36:51.945882    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:36:51.945890    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:36:51.945896    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:36:51.945904    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:36:53.947244    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 27
	I0818 12:36:53.947260    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:53.947320    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5799
	I0818 12:36:53.948198    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for b6:78:b7:43:37:89 in /var/db/dhcpd_leases ...
	I0818 12:36:53.948236    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:36:53.948247    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:36:53.948274    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:36:53.948283    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:36:53.948292    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:36:53.948300    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:36:53.948307    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:36:53.948314    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:36:53.948320    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:36:53.948337    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:36:53.948348    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:36:53.948356    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:36:53.948365    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:36:53.948372    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:36:53.948380    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:36:53.948387    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:36:53.948393    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:36:53.948400    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:36:55.949561    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 28
	I0818 12:36:55.949576    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:55.949658    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5799
	I0818 12:36:55.950437    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for b6:78:b7:43:37:89 in /var/db/dhcpd_leases ...
	I0818 12:36:55.950489    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:36:55.950499    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:36:55.950523    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:36:55.950532    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:36:55.950539    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:36:55.950547    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:36:55.950557    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:36:55.950565    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:36:55.950588    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:36:55.950599    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:36:55.950608    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:36:55.950613    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:36:55.950621    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:36:55.950629    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:36:55.950639    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:36:55.950647    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:36:55.950654    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:36:55.950661    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:36:57.950587    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Attempt 29
	I0818 12:36:57.950602    5591 main.go:141] libmachine: (offline-docker-476000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:57.950663    5591 main.go:141] libmachine: (offline-docker-476000) DBG | hyperkit pid from json: 5799
	I0818 12:36:57.951427    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Searching for b6:78:b7:43:37:89 in /var/db/dhcpd_leases ...
	I0818 12:36:57.951477    5591 main.go:141] libmachine: (offline-docker-476000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:36:57.951487    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:36:57.951497    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:36:57.951504    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:36:57.951510    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:36:57.951516    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:36:57.951523    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:36:57.951529    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:36:57.951547    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:36:57.951564    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:36:57.951573    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:36:57.951582    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:36:57.951589    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:36:57.951597    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:36:57.951605    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:36:57.951612    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:36:57.951619    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:36:57.951627    5591 main.go:141] libmachine: (offline-docker-476000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:36:59.951636    5591 client.go:171] duration metric: took 1m0.748742275s to LocalClient.Create
	I0818 12:37:01.952487    5591 start.go:128] duration metric: took 1m2.780826027s to createHost
	I0818 12:37:01.952501    5591 start.go:83] releasing machines lock for "offline-docker-476000", held for 1m2.780926062s
	W0818 12:37:01.952600    5591 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p offline-docker-476000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for b6:78:b7:43:37:89
	* Failed to start hyperkit VM. Running "minikube delete -p offline-docker-476000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for b6:78:b7:43:37:89
	I0818 12:37:01.994576    5591 out.go:201] 
	W0818 12:37:02.015793    5591 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for b6:78:b7:43:37:89
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for b6:78:b7:43:37:89
	W0818 12:37:02.015809    5591 out.go:270] * 
	* 
	W0818 12:37:02.016547    5591 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:37:02.077635    5591 out.go:201] 

                                                
                                                
** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-476000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-18 12:37:02.181237 -0700 PDT m=+3577.629042363
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-476000 -n offline-docker-476000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-476000 -n offline-docker-476000: exit status 7 (84.408515ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0818 12:37:02.263707    5849 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0818 12:37:02.263730    5849 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-476000" host is not running, skipping log retrieval (state="Error")
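A note on the status probe above: the --format argument is a Go text/template rendered against minikube's status struct, which is why the only stdout is the bare string "Error". Below is a minimal stand-alone illustration of that mechanism; the Status struct is a stand-in for this sketch, not minikube's real type.

// statusfmt.go: sketch of how a --format={{.Host}} style flag is rendered.
// The Status struct is a stand-in for illustration only.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Name      string
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Name: "offline-docker-476000", Host: "Error"}
	// template.Must panics on a malformed template, mirroring a CLI that
	// rejects a bad --format argument up front.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil { // prints "Error"
		os.Exit(1)
	}
}

Any exported field of the struct can be selected the same way, e.g. {{.APIServer}}.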
helpers_test.go:175: Cleaning up "offline-docker-476000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-476000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-476000: (5.253744659s)
--- FAIL: TestOffline (195.35s)
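The three-minute failure above reduces to one operation, retried every two seconds (Attempt 27, 28, 29 ...) until the one-minute createHost budget runs out: scan /var/db/dhcpd_leases for the VM's generated MAC address, b6:78:b7:43:37:89 here, and return the matching IP. Seventeen other minikube leases turn up on every pass, but the new VM's lease never appears, which suggests the guest never booted far enough to DHCP. The Go sketch below reproduces that lookup in isolation so it can be run by hand while a VM boots. It is a minimal sketch under stated assumptions, not the driver's code: it assumes the brace-delimited key=value layout macOS's bootpd writes, with ip_address listed before hw_address in each entry, and it compares MACs as raw strings even though bootpd drops leading zeros in octets (note "42:f:..." in the entries above).

// leasescan.go: stand-alone version of the lease lookup the driver logs above.
// Assumed entry layout (field names may differ by macOS release):
//   {
//       name=minikube
//       ip_address=192.169.0.2
//       hw_address=1,42:f:73:12:11:a3
//       ...
//   }
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIP returns the ip_address of the entry whose hw_address matches mac.
func findIP(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string // most recent ip_address seen in the current entry
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address carries a type prefix, e.g. "1,42:f:73:12:11:a3".
			p := strings.SplitN(strings.TrimPrefix(line, "hw_address="), ",", 2)
			if len(p) == 2 && p[1] == mac {
				return ip, nil
			}
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease for %s", mac)
}

func main() {
	ip, err := findIP("/var/db/dhcpd_leases", "b6:78:b7:43:37:89")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip)
}

Run with "go run leasescan.go" while the VM boots: it prints the IP once a lease lands, or exits nonzero, which is the same signal the driver polls for before timing out.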

                                                
                                    
TestAddons/serial/Volcano (198.65s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 12.768202ms
addons_test.go:905: volcano-admission stabilized in 12.806391ms
addons_test.go:897: volcano-scheduler stabilized in 12.931197ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-2vk2m" [98f53129-5dd3-4216-90db-ed07a46e1df2] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.00387328s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-gwkn5" [8844e85f-92d7-4e6e-8d0c-827c1cb5e09e] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004466997s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-9jmqr" [3cde55bd-eda9-4bd6-bd8e-a60ff8cc3005] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.002472412s
addons_test.go:932: (dbg) Run:  kubectl --context addons-103000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-103000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-103000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [97b45499-5f49-4250-8f1c-273803cf4678] Pending
helpers_test.go:344: "test-job-nginx-0" [97b45499-5f49-4250-8f1c-273803cf4678] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p addons-103000 -n addons-103000
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-08-18 11:45:03.676299 -0700 PDT m=+459.094648837
addons_test.go:964: (dbg) Run:  kubectl --context addons-103000 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-103000 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-bd1bb474-52df-4a72-ad01-bdb56cca69e9
                  volcano.sh/job-name: test-job
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-thsqx (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-thsqx:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From     Message
  ----     ------            ----   ----     -------
  Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run:  kubectl --context addons-103000 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-103000 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
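The describe output pins the failure down: test-job-nginx-0 asks for a full CPU (requests and limits of cpu: 1), and the single two-CPU minikube node, carrying the control plane plus every addon enabled in the start command recorded in the Audit table below, has less than one unreserved CPU of schedulable requests left, so volcano reports "Insufficient cpu". A quick way to see that headroom is to sum the CPU requests already scheduled on each node against its allocatable capacity. The client-go sketch below does exactly that; it is a rough diagnostic under stated assumptions (kubeconfig at the default path, init containers ignored), not minikube or volcano code.

// cpuheadroom.go: back-of-the-envelope check for the FailedScheduling event
// above: per node, sum scheduled CPU requests and compare with allocatable.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the default kubeconfig written by "minikube start".
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		alloc := n.Status.Allocatable.Cpu().MilliValue()

		pods, err := cs.CoreV1().Pods("").List(ctx, metav1.ListOptions{
			FieldSelector: "spec.nodeName=" + n.Name,
		})
		if err != nil {
			panic(err)
		}
		var requested int64
		for _, p := range pods.Items {
			// Terminated pods no longer hold their requests.
			if p.Status.Phase == corev1.PodSucceeded || p.Status.Phase == corev1.PodFailed {
				continue
			}
			for _, c := range p.Spec.Containers {
				requested += c.Resources.Requests.Cpu().MilliValue()
			}
		}
		fmt.Printf("%s: %dm CPU requested of %dm allocatable (%dm free)\n",
			n.Name, requested, alloc, alloc-requested)
	}
}

On a run like this one, the interesting number is the free column for the single node: anything under 1000m explains why the cpu: 1 request never schedules.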
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p addons-103000 -n addons-103000
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p addons-103000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p addons-103000 logs -n 25: (2.415357553s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-948000 | jenkins | v1.33.1 | 18 Aug 24 11:37 PDT |                     |
	|         | -p download-only-948000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=hyperkit                    |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.33.1 | 18 Aug 24 11:37 PDT | 18 Aug 24 11:37 PDT |
	| delete  | -p download-only-948000              | download-only-948000 | jenkins | v1.33.1 | 18 Aug 24 11:37 PDT | 18 Aug 24 11:37 PDT |
	| start   | -o=json --download-only              | download-only-325000 | jenkins | v1.33.1 | 18 Aug 24 11:37 PDT |                     |
	|         | -p download-only-325000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=hyperkit                    |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.33.1 | 18 Aug 24 11:38 PDT | 18 Aug 24 11:38 PDT |
	| delete  | -p download-only-325000              | download-only-325000 | jenkins | v1.33.1 | 18 Aug 24 11:38 PDT | 18 Aug 24 11:38 PDT |
	| delete  | -p download-only-948000              | download-only-948000 | jenkins | v1.33.1 | 18 Aug 24 11:38 PDT | 18 Aug 24 11:38 PDT |
	| delete  | -p download-only-325000              | download-only-325000 | jenkins | v1.33.1 | 18 Aug 24 11:38 PDT | 18 Aug 24 11:38 PDT |
	| start   | --download-only -p                   | binary-mirror-146000 | jenkins | v1.33.1 | 18 Aug 24 11:38 PDT |                     |
	|         | binary-mirror-146000                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49539               |                      |         |         |                     |                     |
	|         | --driver=hyperkit                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-146000              | binary-mirror-146000 | jenkins | v1.33.1 | 18 Aug 24 11:38 PDT | 18 Aug 24 11:38 PDT |
	| addons  | enable dashboard -p                  | addons-103000        | jenkins | v1.33.1 | 18 Aug 24 11:38 PDT |                     |
	|         | addons-103000                        |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-103000        | jenkins | v1.33.1 | 18 Aug 24 11:38 PDT |                     |
	|         | addons-103000                        |                      |         |         |                     |                     |
	| start   | -p addons-103000 --wait=true         | addons-103000        | jenkins | v1.33.1 | 18 Aug 24 11:38 PDT | 18 Aug 24 11:41 PDT |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=hyperkit  --addons=ingress  |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 11:38:05
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 11:38:05.982575    1619 out.go:345] Setting OutFile to fd 1 ...
	I0818 11:38:05.982758    1619 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 11:38:05.982764    1619 out.go:358] Setting ErrFile to fd 2...
	I0818 11:38:05.982768    1619 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 11:38:05.982950    1619 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1007/.minikube/bin
	I0818 11:38:05.984512    1619 out.go:352] Setting JSON to false
	I0818 11:38:06.007574    1619 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":456,"bootTime":1724005829,"procs":416,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0818 11:38:06.007673    1619 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 11:38:06.028650    1619 out.go:177] * [addons-103000] minikube v1.33.1 on Darwin 14.6.1
	I0818 11:38:06.070691    1619 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 11:38:06.070760    1619 notify.go:220] Checking for updates...
	I0818 11:38:06.112464    1619 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 11:38:06.133615    1619 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0818 11:38:06.154331    1619 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 11:38:06.175584    1619 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	I0818 11:38:06.196687    1619 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 11:38:06.217789    1619 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 11:38:06.247773    1619 out.go:177] * Using the hyperkit driver based on user configuration
	I0818 11:38:06.289340    1619 start.go:297] selected driver: hyperkit
	I0818 11:38:06.289370    1619 start.go:901] validating driver "hyperkit" against <nil>
	I0818 11:38:06.289389    1619 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 11:38:06.293757    1619 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 11:38:06.293875    1619 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19423-1007/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0818 11:38:06.302443    1619 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0818 11:38:06.306348    1619 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:06.306371    1619 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0818 11:38:06.306407    1619 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 11:38:06.306613    1619 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 11:38:06.306677    1619 cni.go:84] Creating CNI manager for ""
	I0818 11:38:06.306697    1619 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 11:38:06.306703    1619 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0818 11:38:06.306757    1619 start.go:340] cluster config:
	{Name:addons-103000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-103000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 11:38:06.306851    1619 iso.go:125] acquiring lock: {Name:mkfcc70059a4397f76252ba67d02448d3569c468 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 11:38:06.348879    1619 out.go:177] * Starting "addons-103000" primary control-plane node in "addons-103000" cluster
	I0818 11:38:06.369642    1619 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 11:38:06.369707    1619 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0818 11:38:06.369736    1619 cache.go:56] Caching tarball of preloaded images
	I0818 11:38:06.369973    1619 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0818 11:38:06.370012    1619 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 11:38:06.370536    1619 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/config.json ...
	I0818 11:38:06.370578    1619 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/config.json: {Name:mk3b471d003db4c4da59bb7b734b3527c0e78aad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 11:38:06.371235    1619 start.go:360] acquireMachinesLock for addons-103000: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 11:38:06.371451    1619 start.go:364] duration metric: took 196.017µs to acquireMachinesLock for "addons-103000"
	I0818 11:38:06.371497    1619 start.go:93] Provisioning new machine with config: &{Name:addons-103000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-103000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 11:38:06.371593    1619 start.go:125] createHost starting for "" (driver="hyperkit")
	I0818 11:38:06.392866    1619 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0818 11:38:06.393120    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:06.393200    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:06.403429    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49546
	I0818 11:38:06.403776    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:06.404195    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:06.404205    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:06.404416    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:06.404541    1619 main.go:141] libmachine: (addons-103000) Calling .GetMachineName
	I0818 11:38:06.404626    1619 main.go:141] libmachine: (addons-103000) Calling .DriverName
	I0818 11:38:06.404741    1619 start.go:159] libmachine.API.Create for "addons-103000" (driver="hyperkit")
	I0818 11:38:06.404773    1619 client.go:168] LocalClient.Create starting
	I0818 11:38:06.404809    1619 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem
	I0818 11:38:06.518520    1619 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem
	I0818 11:38:06.576009    1619 main.go:141] libmachine: Running pre-create checks...
	I0818 11:38:06.576021    1619 main.go:141] libmachine: (addons-103000) Calling .PreCreateCheck
	I0818 11:38:06.576174    1619 main.go:141] libmachine: (addons-103000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 11:38:06.576325    1619 main.go:141] libmachine: (addons-103000) Calling .GetConfigRaw
	I0818 11:38:06.576745    1619 main.go:141] libmachine: Creating machine...
	I0818 11:38:06.576758    1619 main.go:141] libmachine: (addons-103000) Calling .Create
	I0818 11:38:06.576824    1619 main.go:141] libmachine: (addons-103000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 11:38:06.576962    1619 main.go:141] libmachine: (addons-103000) DBG | I0818 11:38:06.576821    1627 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19423-1007/.minikube
	I0818 11:38:06.577049    1619 main.go:141] libmachine: (addons-103000) Downloading /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-1007/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0818 11:38:06.853117    1619 main.go:141] libmachine: (addons-103000) DBG | I0818 11:38:06.853006    1627 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/id_rsa...
	I0818 11:38:06.989171    1619 main.go:141] libmachine: (addons-103000) DBG | I0818 11:38:06.989100    1627 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/addons-103000.rawdisk...
	I0818 11:38:06.989198    1619 main.go:141] libmachine: (addons-103000) DBG | Writing magic tar header
	I0818 11:38:06.989212    1619 main.go:141] libmachine: (addons-103000) DBG | Writing SSH key tar header
	I0818 11:38:06.989613    1619 main.go:141] libmachine: (addons-103000) DBG | I0818 11:38:06.989582    1627 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000 ...
	I0818 11:38:07.496113    1619 main.go:141] libmachine: (addons-103000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 11:38:07.496139    1619 main.go:141] libmachine: (addons-103000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/hyperkit.pid
	I0818 11:38:07.496234    1619 main.go:141] libmachine: (addons-103000) DBG | Using UUID a8b30e99-108c-4f88-8d39-79d4089a7b73
	I0818 11:38:07.757063    1619 main.go:141] libmachine: (addons-103000) DBG | Generated MAC 42:f:73:12:11:a3
	I0818 11:38:07.757093    1619 main.go:141] libmachine: (addons-103000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=addons-103000
	I0818 11:38:07.757140    1619 main.go:141] libmachine: (addons-103000) DBG | 2024/08/18 11:38:07 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a8b30e99-108c-4f88-8d39-79d4089a7b73", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/initrd", Bootrom:"", CPUs:2, Memory:4000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 11:38:07.757176    1619 main.go:141] libmachine: (addons-103000) DBG | 2024/08/18 11:38:07 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a8b30e99-108c-4f88-8d39-79d4089a7b73", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/initrd", Bootrom:"", CPUs:2, Memory:4000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 11:38:07.757288    1619 main.go:141] libmachine: (addons-103000) DBG | 2024/08/18 11:38:07 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/hyperkit.pid", "-c", "2", "-m", "4000M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "a8b30e99-108c-4f88-8d39-79d4089a7b73", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/addons-103000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=addons-103000"}
	I0818 11:38:07.757339    1619 main.go:141] libmachine: (addons-103000) DBG | 2024/08/18 11:38:07 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/hyperkit.pid -c 2 -m 4000M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U a8b30e99-108c-4f88-8d39-79d4089a7b73 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/addons-103000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=addons-103000"
	I0818 11:38:07.757359    1619 main.go:141] libmachine: (addons-103000) DBG | 2024/08/18 11:38:07 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 11:38:07.760327    1619 main.go:141] libmachine: (addons-103000) DBG | 2024/08/18 11:38:07 DEBUG: hyperkit: Pid is 1632
	I0818 11:38:07.760740    1619 main.go:141] libmachine: (addons-103000) DBG | Attempt 0
	I0818 11:38:07.760750    1619 main.go:141] libmachine: (addons-103000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 11:38:07.760853    1619 main.go:141] libmachine: (addons-103000) DBG | hyperkit pid from json: 1632
	I0818 11:38:07.761812    1619 main.go:141] libmachine: (addons-103000) DBG | Searching for 42:f:73:12:11:a3 in /var/db/dhcpd_leases ...
	I0818 11:38:07.778337    1619 main.go:141] libmachine: (addons-103000) DBG | 2024/08/18 11:38:07 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 11:38:07.841518    1619 main.go:141] libmachine: (addons-103000) DBG | 2024/08/18 11:38:07 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 11:38:07.842237    1619 main.go:141] libmachine: (addons-103000) DBG | 2024/08/18 11:38:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 11:38:07.842256    1619 main.go:141] libmachine: (addons-103000) DBG | 2024/08/18 11:38:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 11:38:07.842265    1619 main.go:141] libmachine: (addons-103000) DBG | 2024/08/18 11:38:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 11:38:07.842275    1619 main.go:141] libmachine: (addons-103000) DBG | 2024/08/18 11:38:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 11:38:08.379828    1619 main.go:141] libmachine: (addons-103000) DBG | 2024/08/18 11:38:08 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 11:38:08.379844    1619 main.go:141] libmachine: (addons-103000) DBG | 2024/08/18 11:38:08 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 11:38:08.496693    1619 main.go:141] libmachine: (addons-103000) DBG | 2024/08/18 11:38:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 11:38:08.496713    1619 main.go:141] libmachine: (addons-103000) DBG | 2024/08/18 11:38:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 11:38:08.496732    1619 main.go:141] libmachine: (addons-103000) DBG | 2024/08/18 11:38:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 11:38:08.496743    1619 main.go:141] libmachine: (addons-103000) DBG | 2024/08/18 11:38:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 11:38:08.497582    1619 main.go:141] libmachine: (addons-103000) DBG | 2024/08/18 11:38:08 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 11:38:08.497592    1619 main.go:141] libmachine: (addons-103000) DBG | 2024/08/18 11:38:08 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 11:38:09.762025    1619 main.go:141] libmachine: (addons-103000) DBG | Attempt 1
	I0818 11:38:09.762044    1619 main.go:141] libmachine: (addons-103000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 11:38:09.762116    1619 main.go:141] libmachine: (addons-103000) DBG | hyperkit pid from json: 1632
	I0818 11:38:09.762893    1619 main.go:141] libmachine: (addons-103000) DBG | Searching for 42:f:73:12:11:a3 in /var/db/dhcpd_leases ...
	I0818 11:38:11.764964    1619 main.go:141] libmachine: (addons-103000) DBG | Attempt 2
	I0818 11:38:11.764991    1619 main.go:141] libmachine: (addons-103000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 11:38:11.765100    1619 main.go:141] libmachine: (addons-103000) DBG | hyperkit pid from json: 1632
	I0818 11:38:11.765851    1619 main.go:141] libmachine: (addons-103000) DBG | Searching for 42:f:73:12:11:a3 in /var/db/dhcpd_leases ...
	I0818 11:38:13.766992    1619 main.go:141] libmachine: (addons-103000) DBG | Attempt 3
	I0818 11:38:13.767007    1619 main.go:141] libmachine: (addons-103000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 11:38:13.767069    1619 main.go:141] libmachine: (addons-103000) DBG | hyperkit pid from json: 1632
	I0818 11:38:13.767806    1619 main.go:141] libmachine: (addons-103000) DBG | Searching for 42:f:73:12:11:a3 in /var/db/dhcpd_leases ...
	I0818 11:38:14.261229    1619 main.go:141] libmachine: (addons-103000) DBG | 2024/08/18 11:38:14 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0818 11:38:14.261306    1619 main.go:141] libmachine: (addons-103000) DBG | 2024/08/18 11:38:14 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0818 11:38:14.261319    1619 main.go:141] libmachine: (addons-103000) DBG | 2024/08/18 11:38:14 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0818 11:38:14.280148    1619 main.go:141] libmachine: (addons-103000) DBG | 2024/08/18 11:38:14 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0818 11:38:15.767979    1619 main.go:141] libmachine: (addons-103000) DBG | Attempt 4
	I0818 11:38:15.767993    1619 main.go:141] libmachine: (addons-103000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 11:38:15.768081    1619 main.go:141] libmachine: (addons-103000) DBG | hyperkit pid from json: 1632
	I0818 11:38:15.768824    1619 main.go:141] libmachine: (addons-103000) DBG | Searching for 42:f:73:12:11:a3 in /var/db/dhcpd_leases ...
	I0818 11:38:17.770249    1619 main.go:141] libmachine: (addons-103000) DBG | Attempt 5
	I0818 11:38:17.770279    1619 main.go:141] libmachine: (addons-103000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 11:38:17.770490    1619 main.go:141] libmachine: (addons-103000) DBG | hyperkit pid from json: 1632
	I0818 11:38:17.771869    1619 main.go:141] libmachine: (addons-103000) DBG | Searching for 42:f:73:12:11:a3 in /var/db/dhcpd_leases ...
	I0818 11:38:17.771982    1619 main.go:141] libmachine: (addons-103000) DBG | Found 1 entries in /var/db/dhcpd_leases!
	I0818 11:38:17.772006    1619 main.go:141] libmachine: (addons-103000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 11:38:17.772020    1619 main.go:141] libmachine: (addons-103000) DBG | Found match: 42:f:73:12:11:a3
	I0818 11:38:17.772035    1619 main.go:141] libmachine: (addons-103000) DBG | IP: 192.169.0.2
	I0818 11:38:17.772109    1619 main.go:141] libmachine: (addons-103000) Calling .GetConfigRaw
	I0818 11:38:17.772937    1619 main.go:141] libmachine: (addons-103000) Calling .DriverName
	I0818 11:38:17.773081    1619 main.go:141] libmachine: (addons-103000) Calling .DriverName
	I0818 11:38:17.773226    1619 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0818 11:38:17.773238    1619 main.go:141] libmachine: (addons-103000) Calling .GetState
	I0818 11:38:17.773350    1619 main.go:141] libmachine: (addons-103000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 11:38:17.773432    1619 main.go:141] libmachine: (addons-103000) DBG | hyperkit pid from json: 1632
	I0818 11:38:17.774380    1619 main.go:141] libmachine: Detecting operating system of created instance...
	I0818 11:38:17.774391    1619 main.go:141] libmachine: Waiting for SSH to be available...
	I0818 11:38:17.774403    1619 main.go:141] libmachine: Getting to WaitForSSH function...
	I0818 11:38:17.774409    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHHostname
	I0818 11:38:17.774508    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHPort
	I0818 11:38:17.774605    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:17.774711    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:17.774798    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHUsername
	I0818 11:38:17.775354    1619 main.go:141] libmachine: Using SSH client type: native
	I0818 11:38:17.775522    1619 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2da4ea0] 0x2da7c00 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0818 11:38:17.775532    1619 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0818 11:38:18.839775    1619 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 11:38:18.839787    1619 main.go:141] libmachine: Detecting the provisioner...
	I0818 11:38:18.839793    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHHostname
	I0818 11:38:18.839928    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHPort
	I0818 11:38:18.840050    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:18.840145    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:18.840234    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHUsername
	I0818 11:38:18.840337    1619 main.go:141] libmachine: Using SSH client type: native
	I0818 11:38:18.840465    1619 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2da4ea0] 0x2da7c00 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0818 11:38:18.840476    1619 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0818 11:38:18.906452    1619 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0818 11:38:18.906514    1619 main.go:141] libmachine: found compatible host: buildroot
	I0818 11:38:18.906520    1619 main.go:141] libmachine: Provisioning with buildroot...
	I0818 11:38:18.906525    1619 main.go:141] libmachine: (addons-103000) Calling .GetMachineName
	I0818 11:38:18.906659    1619 buildroot.go:166] provisioning hostname "addons-103000"
	I0818 11:38:18.906671    1619 main.go:141] libmachine: (addons-103000) Calling .GetMachineName
	I0818 11:38:18.906764    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHHostname
	I0818 11:38:18.906849    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHPort
	I0818 11:38:18.906962    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:18.907064    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:18.907151    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHUsername
	I0818 11:38:18.907285    1619 main.go:141] libmachine: Using SSH client type: native
	I0818 11:38:18.907435    1619 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2da4ea0] 0x2da7c00 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0818 11:38:18.907443    1619 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-103000 && echo "addons-103000" | sudo tee /etc/hostname
	I0818 11:38:18.982041    1619 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-103000
	
	I0818 11:38:18.982059    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHHostname
	I0818 11:38:18.982193    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHPort
	I0818 11:38:18.982291    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:18.982380    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:18.982473    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHUsername
	I0818 11:38:18.982595    1619 main.go:141] libmachine: Using SSH client type: native
	I0818 11:38:18.982737    1619 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2da4ea0] 0x2da7c00 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0818 11:38:18.982750    1619 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-103000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-103000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-103000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 11:38:19.051611    1619 main.go:141] libmachine: SSH cmd err, output: <nil>: 
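
The script above keeps the /etc/hosts edit idempotent: it rewrites the 127.0.1.1 entry only when no line already maps the hostname, and appends one otherwise. A rough in-process equivalent of the same decision logic (an approximation for illustration, not the code minikube runs):

package main

import (
	"fmt"
	"strings"
)

// ensureHostname approximates the shell logic above: if no line already
// ends with the hostname, rewrite an existing 127.0.1.1 entry or append one.
func ensureHostname(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) >= 2 && f[len(f)-1] == name {
			return hosts // hostname already mapped, nothing to do
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	hosts := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
	fmt.Print(ensureHostname(hosts, "addons-103000"))
}
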
	I0818 11:38:19.051631    1619 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1007/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1007/.minikube}
	I0818 11:38:19.051647    1619 buildroot.go:174] setting up certificates
	I0818 11:38:19.051660    1619 provision.go:84] configureAuth start
	I0818 11:38:19.051668    1619 main.go:141] libmachine: (addons-103000) Calling .GetMachineName
	I0818 11:38:19.051805    1619 main.go:141] libmachine: (addons-103000) Calling .GetIP
	I0818 11:38:19.051914    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHHostname
	I0818 11:38:19.051997    1619 provision.go:143] copyHostCerts
	I0818 11:38:19.052095    1619 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem (1082 bytes)
	I0818 11:38:19.052393    1619 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem (1123 bytes)
	I0818 11:38:19.052589    1619 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem (1675 bytes)
	I0818 11:38:19.052740    1619 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem org=jenkins.addons-103000 san=[127.0.0.1 192.169.0.2 addons-103000 localhost minikube]
	I0818 11:38:19.284517    1619 provision.go:177] copyRemoteCerts
	I0818 11:38:19.284858    1619 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 11:38:19.284876    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHHostname
	I0818 11:38:19.285047    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHPort
	I0818 11:38:19.285138    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:19.285231    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHUsername
	I0818 11:38:19.285315    1619 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/id_rsa Username:docker}
	I0818 11:38:19.325045    1619 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 11:38:19.346575    1619 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0818 11:38:19.366401    1619 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 11:38:19.387070    1619 provision.go:87] duration metric: took 335.397384ms to configureAuth
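
configureAuth copies the CA material to the host store and mints a server certificate whose SANs cover every address the Docker daemon will serve on (127.0.0.1, 192.169.0.2, addons-103000, localhost, minikube, per the san=[...] list above). A compressed sketch with crypto/x509; it self-signs for brevity, where the real flow signs with the ca.pem/ca-key.pem pair:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Generate a server key; the provision step above does the equivalent.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-103000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log: the IPs and names the daemon serves on.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.2")},
		DNSNames:    []string{"addons-103000", "localhost", "minikube"},
	}
	// Self-signed here for brevity; the real flow signs with the CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
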
	I0818 11:38:19.387085    1619 buildroot.go:189] setting minikube options for container-runtime
	I0818 11:38:19.387235    1619 config.go:182] Loaded profile config "addons-103000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 11:38:19.387249    1619 main.go:141] libmachine: (addons-103000) Calling .DriverName
	I0818 11:38:19.387388    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHHostname
	I0818 11:38:19.387485    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHPort
	I0818 11:38:19.387583    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:19.387666    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:19.387740    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHUsername
	I0818 11:38:19.387868    1619 main.go:141] libmachine: Using SSH client type: native
	I0818 11:38:19.387993    1619 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2da4ea0] 0x2da7c00 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0818 11:38:19.388002    1619 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0818 11:38:19.451715    1619 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0818 11:38:19.451730    1619 buildroot.go:70] root file system type: tmpfs
	I0818 11:38:19.451829    1619 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0818 11:38:19.451843    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHHostname
	I0818 11:38:19.451967    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHPort
	I0818 11:38:19.452061    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:19.452144    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:19.452228    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHUsername
	I0818 11:38:19.452350    1619 main.go:141] libmachine: Using SSH client type: native
	I0818 11:38:19.452487    1619 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2da4ea0] 0x2da7c00 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0818 11:38:19.452531    1619 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0818 11:38:19.527954    1619 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0818 11:38:19.527987    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHHostname
	I0818 11:38:19.528135    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHPort
	I0818 11:38:19.528230    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:19.528318    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:19.528415    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHUsername
	I0818 11:38:19.528566    1619 main.go:141] libmachine: Using SSH client type: native
	I0818 11:38:19.528714    1619 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2da4ea0] 0x2da7c00 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0818 11:38:19.528728    1619 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0818 11:38:21.085441    1619 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
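
The "diff ... || { mv ...; systemctl ... }" one-liner above is an idempotent-update idiom: the fresh unit is written to docker.service.new, compared against the installed copy, and systemd is only reloaded and the service restarted when the content actually differs. On first boot the diff fails because no unit exists yet, as the stat error in the output shows, so the unit is installed and enabled. The same shape in Go, sketched for a local file with assumed paths (not minikube's implementation):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged mirrors the diff-or-replace pattern above: only touch
// systemd when the unit content actually differs from what is installed.
func installIfChanged(path string, newContent []byte, unit string) error {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newContent) {
		return nil // unchanged: skip the reload and restart entirely
	}
	if err := os.WriteFile(path, newContent, 0o644); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", unit},
		{"systemctl", "restart", unit},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=example\n")
	fmt.Println(installIfChanged("/tmp/docker.service", unit, "docker"))
}
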
	I0818 11:38:21.085457    1619 main.go:141] libmachine: Checking connection to Docker...
	I0818 11:38:21.085463    1619 main.go:141] libmachine: (addons-103000) Calling .GetURL
	I0818 11:38:21.085607    1619 main.go:141] libmachine: Docker is up and running!
	I0818 11:38:21.085615    1619 main.go:141] libmachine: Reticulating splines...
	I0818 11:38:21.085619    1619 client.go:171] duration metric: took 14.680944012s to LocalClient.Create
	I0818 11:38:21.085630    1619 start.go:167] duration metric: took 14.680994727s to libmachine.API.Create "addons-103000"
	I0818 11:38:21.085640    1619 start.go:293] postStartSetup for "addons-103000" (driver="hyperkit")
	I0818 11:38:21.085648    1619 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 11:38:21.085657    1619 main.go:141] libmachine: (addons-103000) Calling .DriverName
	I0818 11:38:21.085785    1619 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 11:38:21.085798    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHHostname
	I0818 11:38:21.085891    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHPort
	I0818 11:38:21.085997    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:21.086086    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHUsername
	I0818 11:38:21.086176    1619 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/id_rsa Username:docker}
	I0818 11:38:21.130960    1619 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 11:38:21.135379    1619 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 11:38:21.135400    1619 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/addons for local assets ...
	I0818 11:38:21.135494    1619 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/files for local assets ...
	I0818 11:38:21.135546    1619 start.go:296] duration metric: took 49.900925ms for postStartSetup
	I0818 11:38:21.135570    1619 main.go:141] libmachine: (addons-103000) Calling .GetConfigRaw
	I0818 11:38:21.136176    1619 main.go:141] libmachine: (addons-103000) Calling .GetIP
	I0818 11:38:21.136723    1619 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/config.json ...
	I0818 11:38:21.137471    1619 start.go:128] duration metric: took 14.765967655s to createHost
	I0818 11:38:21.137493    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHHostname
	I0818 11:38:21.137603    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHPort
	I0818 11:38:21.137687    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:21.137789    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:21.137869    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHUsername
	I0818 11:38:21.137976    1619 main.go:141] libmachine: Using SSH client type: native
	I0818 11:38:21.138103    1619 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x2da4ea0] 0x2da7c00 <nil>  [] 0s} 192.169.0.2 22 <nil> <nil>}
	I0818 11:38:21.138110    1619 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 11:38:21.207343    1619 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724006300.800260180
	
	I0818 11:38:21.207354    1619 fix.go:216] guest clock: 1724006300.800260180
	I0818 11:38:21.207359    1619 fix.go:229] Guest: 2024-08-18 11:38:20.80026018 -0700 PDT Remote: 2024-08-18 11:38:21.137481 -0700 PDT m=+15.190781430 (delta=-337.22082ms)
	I0818 11:38:21.207376    1619 fix.go:200] guest clock delta is within tolerance: -337.22082ms
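
The guest-clock check runs "date +%s.%N" in the VM, subtracts the host's wall clock, and accepts the skew if it falls within tolerance; here the delta is -337.22082ms. A sketch of that computation, using the exact timestamps from the log:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses `date +%s.%N` output from the guest and returns
// guestTime - hostTime, the quantity the log reports as "delta".
func clockDelta(guest string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	// Host timestamp taken from the log; in real code this is time.Now().
	host := time.Unix(1724006301, 137481000)
	d, err := clockDelta("1724006300.800260180", host)
	if err != nil {
		panic(err)
	}
	tolerance := time.Second
	fmt.Printf("delta=%v, within tolerance: %v\n", d, d > -tolerance && d < tolerance)
}
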
	I0818 11:38:21.207381    1619 start.go:83] releasing machines lock for "addons-103000", held for 14.836021386s
	I0818 11:38:21.207398    1619 main.go:141] libmachine: (addons-103000) Calling .DriverName
	I0818 11:38:21.207536    1619 main.go:141] libmachine: (addons-103000) Calling .GetIP
	I0818 11:38:21.207633    1619 main.go:141] libmachine: (addons-103000) Calling .DriverName
	I0818 11:38:21.207933    1619 main.go:141] libmachine: (addons-103000) Calling .DriverName
	I0818 11:38:21.208079    1619 main.go:141] libmachine: (addons-103000) Calling .DriverName
	I0818 11:38:21.208217    1619 ssh_runner.go:195] Run: cat /version.json
	I0818 11:38:21.208228    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHHostname
	I0818 11:38:21.208328    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHPort
	I0818 11:38:21.208394    1619 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 11:38:21.208421    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:21.208427    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHHostname
	I0818 11:38:21.208522    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHUsername
	I0818 11:38:21.208541    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHPort
	I0818 11:38:21.208618    1619 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/id_rsa Username:docker}
	I0818 11:38:21.208625    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:21.208722    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHUsername
	I0818 11:38:21.208817    1619 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/id_rsa Username:docker}
	I0818 11:38:21.247868    1619 ssh_runner.go:195] Run: systemctl --version
	I0818 11:38:21.308723    1619 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 11:38:21.313443    1619 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 11:38:21.313488    1619 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 11:38:21.325875    1619 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 11:38:21.325885    1619 start.go:495] detecting cgroup driver to use...
	I0818 11:38:21.325979    1619 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 11:38:21.342275    1619 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0818 11:38:21.350540    1619 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0818 11:38:21.358776    1619 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0818 11:38:21.358817    1619 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0818 11:38:21.367082    1619 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 11:38:21.376995    1619 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0818 11:38:21.386506    1619 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 11:38:21.396132    1619 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 11:38:21.405171    1619 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0818 11:38:21.413848    1619 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0818 11:38:21.422502    1619 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0818 11:38:21.432027    1619 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 11:38:21.440754    1619 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 11:38:21.449024    1619 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 11:38:21.552657    1619 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0818 11:38:21.570048    1619 start.go:495] detecting cgroup driver to use...
	I0818 11:38:21.570125    1619 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0818 11:38:21.581238    1619 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 11:38:21.593008    1619 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 11:38:21.608111    1619 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 11:38:21.619555    1619 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 11:38:21.630591    1619 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0818 11:38:21.655945    1619 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 11:38:21.666892    1619 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 11:38:21.682312    1619 ssh_runner.go:195] Run: which cri-dockerd
	I0818 11:38:21.685402    1619 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0818 11:38:21.693366    1619 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0818 11:38:21.706642    1619 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0818 11:38:21.806589    1619 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0818 11:38:21.908985    1619 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0818 11:38:21.909059    1619 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0818 11:38:21.924791    1619 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 11:38:22.028105    1619 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 11:38:24.321678    1619 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.29356916s)
	I0818 11:38:24.321749    1619 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0818 11:38:24.334038    1619 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0818 11:38:24.347844    1619 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 11:38:24.358081    1619 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0818 11:38:24.457821    1619 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0818 11:38:24.560547    1619 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 11:38:24.679693    1619 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0818 11:38:24.694355    1619 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 11:38:24.705528    1619 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 11:38:24.810575    1619 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0818 11:38:24.871473    1619 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0818 11:38:24.871925    1619 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
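
After restarting cri-docker.service, start-up blocks for up to 60s until /var/run/cri-dockerd.sock exists, using a plain stat. A small polling sketch of that wait:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for a file (here a unix socket) until it exists or the
// timeout expires, like the 60s wait for /var/run/cri-dockerd.sock above.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("%s did not appear within %s", path, timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	fmt.Println(waitForPath("/var/run/cri-dockerd.sock", 60*time.Second))
}
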
	I0818 11:38:24.876398    1619 start.go:563] Will wait 60s for crictl version
	I0818 11:38:24.876449    1619 ssh_runner.go:195] Run: which crictl
	I0818 11:38:24.879801    1619 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 11:38:24.907814    1619 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0818 11:38:24.907892    1619 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 11:38:24.926299    1619 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 11:38:24.990772    1619 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0818 11:38:24.990803    1619 main.go:141] libmachine: (addons-103000) Calling .GetIP
	I0818 11:38:24.991270    1619 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0818 11:38:24.994824    1619 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 11:38:25.005240    1619 kubeadm.go:883] updating cluster {Name:addons-103000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-103000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 11:38:25.005321    1619 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 11:38:25.005371    1619 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0818 11:38:25.023206    1619 docker.go:685] Got preloaded images: 
	I0818 11:38:25.023218    1619 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0818 11:38:25.023259    1619 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0818 11:38:25.032783    1619 ssh_runner.go:195] Run: which lz4
	I0818 11:38:25.036077    1619 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 11:38:25.039334    1619 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 11:38:25.039352    1619 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342554258 bytes)
	I0818 11:38:26.133301    1619 docker.go:649] duration metric: took 1.097275185s to copy over tarball
	I0818 11:38:26.133366    1619 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 11:38:28.663130    1619 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.529765253s)
	I0818 11:38:28.663145    1619 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 11:38:28.690058    1619 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0818 11:38:28.698882    1619 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0818 11:38:28.712275    1619 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 11:38:28.812482    1619 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 11:38:31.250943    1619 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.438458422s)
	I0818 11:38:31.251022    1619 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0818 11:38:31.265367    1619 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0818 11:38:31.265394    1619 cache_images.go:84] Images are preloaded, skipping loading
	I0818 11:38:31.265415    1619 kubeadm.go:934] updating node { 192.169.0.2 8443 v1.31.0 docker true true} ...
	I0818 11:38:31.265498    1619 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-103000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-103000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 11:38:31.265565    1619 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0818 11:38:31.305384    1619 cni.go:84] Creating CNI manager for ""
	I0818 11:38:31.305399    1619 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 11:38:31.305409    1619 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 11:38:31.305424    1619 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-103000 NodeName:addons-103000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 11:38:31.305506    1619 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-103000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
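
The kubeadm YAML above is rendered from the options struct logged at kubeadm.go:181, presumably via Go templating. A cut-down sketch with text/template covering a few of the fields visible in the log; the struct shape and template text here are illustrative assumptions, not minikube's real template:

package main

import (
	"os"
	"text/template"
)

// A reduced model of the options visible in the log; the real struct
// carries many more fields (admission plugins, etcd args, and so on).
type opts struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.169.0.2",
		BindPort:         8443,
		NodeName:         "addons-103000",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
	}); err != nil {
		panic(err)
	}
}
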
	I0818 11:38:31.305572    1619 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 11:38:31.314020    1619 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 11:38:31.314073    1619 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 11:38:31.322186    1619 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0818 11:38:31.336940    1619 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 11:38:31.350922    1619 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2152 bytes)
	I0818 11:38:31.365736    1619 ssh_runner.go:195] Run: grep 192.169.0.2	control-plane.minikube.internal$ /etc/hosts
	I0818 11:38:31.368568    1619 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 11:38:31.379006    1619 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 11:38:31.482240    1619 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 11:38:31.498830    1619 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000 for IP: 192.169.0.2
	I0818 11:38:31.498842    1619 certs.go:194] generating shared ca certs ...
	I0818 11:38:31.498854    1619 certs.go:226] acquiring lock for ca certs: {Name:mkf9c4ce6e92bb713042213e4605fc93500c1ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 11:38:31.499042    1619 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key
	I0818 11:38:31.600783    1619 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt ...
	I0818 11:38:31.600798    1619 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt: {Name:mkdb1a8749b0a1d465ea483cfe434fdf96bfc020 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 11:38:31.601127    1619 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key ...
	I0818 11:38:31.601134    1619 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key: {Name:mk5cce7644c99af7191823667d4edbb0fc3b8ce8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 11:38:31.601345    1619 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key
	I0818 11:38:31.651355    1619 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt ...
	I0818 11:38:31.651372    1619 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt: {Name:mk19da610b5ecc00e9c9dc6ab4e9313c662000c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 11:38:31.651644    1619 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key ...
	I0818 11:38:31.651652    1619 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key: {Name:mkbf7d1518b6ed2b607bfcaacf201d5e8ed9e586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 11:38:31.651850    1619 certs.go:256] generating profile certs ...
	I0818 11:38:31.651901    1619 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.key
	I0818 11:38:31.651915    1619 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt with IP's: []
	I0818 11:38:31.721320    1619 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt ...
	I0818 11:38:31.721334    1619 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: {Name:mkafe85c5b492ea6a2dfbe209ece00bbcf7c650b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 11:38:31.721645    1619 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.key ...
	I0818 11:38:31.721653    1619 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.key: {Name:mkc65430d5cb11feac96971a353ba4227039f881 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 11:38:31.721852    1619 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/apiserver.key.cf972efc
	I0818 11:38:31.721869    1619 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/apiserver.crt.cf972efc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.2]
	I0818 11:38:31.786081    1619 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/apiserver.crt.cf972efc ...
	I0818 11:38:31.786096    1619 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/apiserver.crt.cf972efc: {Name:mk85883b9c7813b56e50d2bb6a385723e8c8f8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 11:38:31.786384    1619 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/apiserver.key.cf972efc ...
	I0818 11:38:31.786394    1619 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/apiserver.key.cf972efc: {Name:mk34af8d61f7adc957fd5e5255b41e25ec8d1c60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 11:38:31.786608    1619 certs.go:381] copying /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/apiserver.crt.cf972efc -> /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/apiserver.crt
	I0818 11:38:31.786784    1619 certs.go:385] copying /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/apiserver.key.cf972efc -> /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/apiserver.key
	I0818 11:38:31.786943    1619 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/proxy-client.key
	I0818 11:38:31.786961    1619 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/proxy-client.crt with IP's: []
	I0818 11:38:31.859431    1619 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/proxy-client.crt ...
	I0818 11:38:31.859445    1619 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/proxy-client.crt: {Name:mk9548e37045c5ad38b99b1ec17a433c805aa98b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 11:38:31.859729    1619 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/proxy-client.key ...
	I0818 11:38:31.859738    1619 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/proxy-client.key: {Name:mkbef8adfaca4a1a937d8539913a653fb8133da4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 11:38:31.860151    1619 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem (1675 bytes)
	I0818 11:38:31.860199    1619 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem (1082 bytes)
	I0818 11:38:31.860243    1619 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem (1123 bytes)
	I0818 11:38:31.860283    1619 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem (1675 bytes)
	I0818 11:38:31.860751    1619 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 11:38:31.883548    1619 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0818 11:38:31.904243    1619 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 11:38:31.924355    1619 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0818 11:38:31.945770    1619 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0818 11:38:31.965501    1619 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 11:38:31.986446    1619 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 11:38:32.007781    1619 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0818 11:38:32.028114    1619 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 11:38:32.049531    1619 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 11:38:32.064310    1619 ssh_runner.go:195] Run: openssl version
	I0818 11:38:32.068550    1619 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 11:38:32.077624    1619 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 11:38:32.081154    1619 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:38 /usr/share/ca-certificates/minikubeCA.pem
	I0818 11:38:32.081190    1619 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 11:38:32.085825    1619 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 11:38:32.094795    1619 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 11:38:32.097882    1619 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0818 11:38:32.097919    1619 kubeadm.go:392] StartCluster: {Name:addons-103000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-103000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 11:38:32.098000    1619 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0818 11:38:32.111171    1619 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 11:38:32.118590    1619 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 11:38:32.126357    1619 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 11:38:32.134623    1619 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 11:38:32.134633    1619 kubeadm.go:157] found existing configuration files:
	
	I0818 11:38:32.134671    1619 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 11:38:32.145439    1619 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 11:38:32.145512    1619 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 11:38:32.155043    1619 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 11:38:32.168341    1619 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 11:38:32.168395    1619 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 11:38:32.178816    1619 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 11:38:32.192494    1619 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 11:38:32.192540    1619 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 11:38:32.200043    1619 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 11:38:32.207255    1619 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 11:38:32.207297    1619 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 11:38:32.214644    1619 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 11:38:32.250593    1619 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0818 11:38:32.250644    1619 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 11:38:32.326777    1619 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 11:38:32.326878    1619 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 11:38:32.326961    1619 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0818 11:38:32.335581    1619 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 11:38:32.388685    1619 out.go:235]   - Generating certificates and keys ...
	I0818 11:38:32.388749    1619 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 11:38:32.388823    1619 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 11:38:32.608792    1619 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0818 11:38:33.139682    1619 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0818 11:38:33.279892    1619 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0818 11:38:33.396559    1619 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0818 11:38:33.549142    1619 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0818 11:38:33.549232    1619 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-103000 localhost] and IPs [192.169.0.2 127.0.0.1 ::1]
	I0818 11:38:33.772832    1619 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0818 11:38:33.773028    1619 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-103000 localhost] and IPs [192.169.0.2 127.0.0.1 ::1]
	I0818 11:38:34.162984    1619 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0818 11:38:34.298735    1619 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0818 11:38:34.502207    1619 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0818 11:38:34.502367    1619 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 11:38:34.790453    1619 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 11:38:34.867540    1619 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0818 11:38:34.963055    1619 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 11:38:35.616378    1619 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 11:38:36.020291    1619 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 11:38:36.020706    1619 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 11:38:36.025028    1619 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 11:38:36.046614    1619 out.go:235]   - Booting up control plane ...
	I0818 11:38:36.046712    1619 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 11:38:36.046801    1619 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 11:38:36.046874    1619 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 11:38:36.046967    1619 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 11:38:36.047045    1619 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 11:38:36.047084    1619 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 11:38:36.161085    1619 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0818 11:38:36.161187    1619 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0818 11:38:36.661268    1619 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 500.915011ms
	I0818 11:38:36.661371    1619 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0818 11:38:41.159289    1619 kubeadm.go:310] [api-check] The API server is healthy after 4.501168336s
	I0818 11:38:41.170394    1619 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0818 11:38:41.177760    1619 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0818 11:38:41.192757    1619 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0818 11:38:41.192914    1619 kubeadm.go:310] [mark-control-plane] Marking the node addons-103000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0818 11:38:41.199045    1619 kubeadm.go:310] [bootstrap-token] Using token: cl2h5l.nig37upukn3qodi2
	I0818 11:38:41.238840    1619 out.go:235]   - Configuring RBAC rules ...
	I0818 11:38:41.238922    1619 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0818 11:38:41.242847    1619 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0818 11:38:41.273144    1619 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0818 11:38:41.275220    1619 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0818 11:38:41.277281    1619 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0818 11:38:41.279147    1619 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0818 11:38:41.563392    1619 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0818 11:38:41.980119    1619 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0818 11:38:42.563441    1619 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0818 11:38:42.563982    1619 kubeadm.go:310] 
	I0818 11:38:42.564037    1619 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0818 11:38:42.564051    1619 kubeadm.go:310] 
	I0818 11:38:42.564141    1619 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0818 11:38:42.564150    1619 kubeadm.go:310] 
	I0818 11:38:42.564168    1619 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0818 11:38:42.564223    1619 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0818 11:38:42.564269    1619 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0818 11:38:42.564281    1619 kubeadm.go:310] 
	I0818 11:38:42.564337    1619 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0818 11:38:42.564357    1619 kubeadm.go:310] 
	I0818 11:38:42.564426    1619 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0818 11:38:42.564433    1619 kubeadm.go:310] 
	I0818 11:38:42.564475    1619 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0818 11:38:42.564539    1619 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0818 11:38:42.564605    1619 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0818 11:38:42.564617    1619 kubeadm.go:310] 
	I0818 11:38:42.564685    1619 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0818 11:38:42.564755    1619 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0818 11:38:42.564761    1619 kubeadm.go:310] 
	I0818 11:38:42.564831    1619 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cl2h5l.nig37upukn3qodi2 \
	I0818 11:38:42.564913    1619 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f3219daeafc7d9f43b6059f3745ba2c0275f1db525d049b657fd827dc6266aef \
	I0818 11:38:42.564930    1619 kubeadm.go:310] 	--control-plane 
	I0818 11:38:42.564936    1619 kubeadm.go:310] 
	I0818 11:38:42.565004    1619 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0818 11:38:42.565012    1619 kubeadm.go:310] 
	I0818 11:38:42.565082    1619 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cl2h5l.nig37upukn3qodi2 \
	I0818 11:38:42.565168    1619 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f3219daeafc7d9f43b6059f3745ba2c0275f1db525d049b657fd827dc6266aef 
	I0818 11:38:42.566026    1619 kubeadm.go:310] W0818 18:38:31.850037    1581 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 11:38:42.566266    1619 kubeadm.go:310] W0818 18:38:31.850512    1581 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 11:38:42.566353    1619 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
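Editor's note: the two W-level lines above flag that /var/tmp/minikube/kubeadm.yaml (the config passed to kubeadm init earlier in this log) still uses the deprecated kubeadm.k8s.io/v1beta3 API. A hedged sketch of the migration kubeadm itself recommends — the output file name is an illustrative choice, not something from this run:

	# Rewrite the deprecated v1beta3 config with a newer API version, as the warning suggests.
	sudo kubeadm config migrate \
	    --old-config /var/tmp/minikube/kubeadm.yaml \
	    --new-config /var/tmp/minikube/kubeadm-migrated.yaml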
	I0818 11:38:42.566363    1619 cni.go:84] Creating CNI manager for ""
	I0818 11:38:42.566372    1619 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 11:38:42.587857    1619 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 11:38:42.646232    1619 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 11:38:42.656290    1619 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
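Editor's note: the 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is copied from memory and never printed, so its exact contents are not recoverable from this log. Purely as a hedged sketch, a representative bridge conflist of roughly that shape looks like the following (the subnet and plugin options are assumptions, not the file's verified contents):

	# Illustrative only -- the real file's 496 bytes are not shown in this log.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF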
	I0818 11:38:42.669954    1619 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 11:38:42.670033    1619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 11:38:42.670034    1619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-103000 minikube.k8s.io/updated_at=2024_08_18T11_38_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=addons-103000 minikube.k8s.io/primary=true
	I0818 11:38:42.681056    1619 ops.go:34] apiserver oom_adj: -16
	I0818 11:38:42.763024    1619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 11:38:43.263259    1619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 11:38:43.763165    1619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 11:38:44.263420    1619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 11:38:44.764442    1619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 11:38:45.263637    1619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 11:38:45.764402    1619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 11:38:46.265066    1619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 11:38:46.764229    1619 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 11:38:46.833127    1619 kubeadm.go:1113] duration metric: took 4.163189361s to wait for elevateKubeSystemPrivileges
	I0818 11:38:46.833143    1619 kubeadm.go:394] duration metric: took 14.735332198s to StartCluster
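Editor's note: the ten `get sa default` runs above, spaced roughly 500ms apart, are the elevateKubeSystemPrivileges wait — the minikube-rbac clusterrolebinding cannot be applied until the controller-manager has created the default service account, so minikube polls for it. The same wait, expressed as a standalone shell sketch using the paths from this log:

	# Poll until the default service account appears (sketch of the loop logged above).
	until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done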
	I0818 11:38:46.833160    1619 settings.go:142] acquiring lock: {Name:mk54874f9a6ab5b428c8697c9ef71cbcdde2d89e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 11:38:46.833309    1619 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 11:38:46.833580    1619 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/kubeconfig: {Name:mk884388b2510ace5b23e8fdd3343cda9524a1f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 11:38:46.833852    1619 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0818 11:38:46.833881    1619 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 11:38:46.833916    1619 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
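Editor's note: the toEnable map is the per-profile addon plan — everything marked true here (yakd, volcano, csi-hostpath-driver, metrics-server, and so on) is wired up in the flood of "Setting addon" lines that follows. Outside a test run, the same switches are flipped with the addons subcommand; a usage sketch:

	# Toggle individual addons on the addons-103000 profile.
	minikube addons enable metrics-server -p addons-103000
	minikube addons disable volcano -p addons-103000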
	I0818 11:38:46.833971    1619 addons.go:69] Setting yakd=true in profile "addons-103000"
	I0818 11:38:46.833994    1619 addons.go:234] Setting addon yakd=true in "addons-103000"
	I0818 11:38:46.833983    1619 addons.go:69] Setting inspektor-gadget=true in profile "addons-103000"
	I0818 11:38:46.834019    1619 host.go:66] Checking if "addons-103000" exists ...
	I0818 11:38:46.834028    1619 addons.go:234] Setting addon inspektor-gadget=true in "addons-103000"
	I0818 11:38:46.834038    1619 addons.go:69] Setting gcp-auth=true in profile "addons-103000"
	I0818 11:38:46.834061    1619 config.go:182] Loaded profile config "addons-103000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 11:38:46.834066    1619 host.go:66] Checking if "addons-103000" exists ...
	I0818 11:38:46.834062    1619 addons.go:69] Setting storage-provisioner=true in profile "addons-103000"
	I0818 11:38:46.834089    1619 mustload.go:65] Loading cluster: addons-103000
	I0818 11:38:46.834090    1619 addons.go:69] Setting metrics-server=true in profile "addons-103000"
	I0818 11:38:46.834081    1619 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-103000"
	I0818 11:38:46.834159    1619 addons.go:69] Setting ingress=true in profile "addons-103000"
	I0818 11:38:46.834158    1619 addons.go:69] Setting registry=true in profile "addons-103000"
	I0818 11:38:46.834168    1619 addons.go:234] Setting addon metrics-server=true in "addons-103000"
	I0818 11:38:46.834161    1619 addons.go:69] Setting volcano=true in profile "addons-103000"
	I0818 11:38:46.834196    1619 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-103000"
	I0818 11:38:46.834211    1619 addons.go:234] Setting addon ingress=true in "addons-103000"
	I0818 11:38:46.834233    1619 addons.go:234] Setting addon volcano=true in "addons-103000"
	I0818 11:38:46.834238    1619 addons.go:234] Setting addon registry=true in "addons-103000"
	I0818 11:38:46.834199    1619 addons.go:69] Setting volumesnapshots=true in profile "addons-103000"
	I0818 11:38:46.834259    1619 host.go:66] Checking if "addons-103000" exists ...
	I0818 11:38:46.834266    1619 host.go:66] Checking if "addons-103000" exists ...
	I0818 11:38:46.834268    1619 host.go:66] Checking if "addons-103000" exists ...
	I0818 11:38:46.834291    1619 addons.go:234] Setting addon volumesnapshots=true in "addons-103000"
	I0818 11:38:46.834306    1619 config.go:182] Loaded profile config "addons-103000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 11:38:46.834281    1619 addons.go:69] Setting ingress-dns=true in profile "addons-103000"
	I0818 11:38:46.834353    1619 host.go:66] Checking if "addons-103000" exists ...
	I0818 11:38:46.834364    1619 host.go:66] Checking if "addons-103000" exists ...
	I0818 11:38:46.834367    1619 host.go:66] Checking if "addons-103000" exists ...
	I0818 11:38:46.834378    1619 addons.go:234] Setting addon ingress-dns=true in "addons-103000"
	I0818 11:38:46.834398    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.834443    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.834448    1619 host.go:66] Checking if "addons-103000" exists ...
	I0818 11:38:46.834624    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.834747    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.834769    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.834773    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.834804    1619 addons.go:69] Setting helm-tiller=true in profile "addons-103000"
	I0818 11:38:46.834815    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.834818    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.835557    1619 addons.go:234] Setting addon helm-tiller=true in "addons-103000"
	I0818 11:38:46.835903    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.836060    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.836163    1619 addons.go:69] Setting cloud-spanner=true in profile "addons-103000"
	I0818 11:38:46.836185    1619 host.go:66] Checking if "addons-103000" exists ...
	I0818 11:38:46.836324    1619 addons.go:234] Setting addon cloud-spanner=true in "addons-103000"
	I0818 11:38:46.836330    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.836361    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.836387    1619 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-103000"
	I0818 11:38:46.836443    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.836513    1619 addons.go:69] Setting default-storageclass=true in profile "addons-103000"
	I0818 11:38:46.836535    1619 host.go:66] Checking if "addons-103000" exists ...
	I0818 11:38:46.836618    1619 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-103000"
	I0818 11:38:46.834118    1619 addons.go:234] Setting addon storage-provisioner=true in "addons-103000"
	I0818 11:38:46.836646    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.836688    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.836726    1619 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-103000"
	I0818 11:38:46.836735    1619 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-103000"
	I0818 11:38:46.836621    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.836813    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.836814    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.836908    1619 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-103000"
	I0818 11:38:46.836929    1619 host.go:66] Checking if "addons-103000" exists ...
	I0818 11:38:46.836868    1619 host.go:66] Checking if "addons-103000" exists ...
	I0818 11:38:46.836962    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.837007    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.837555    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.837578    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.837632    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.837659    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.837676    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.838032    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.837704    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.837787    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.838064    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.837825    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.838082    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.839184    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.849811    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49569
	I0818 11:38:46.851525    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49570
	I0818 11:38:46.855026    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49573
	I0818 11:38:46.855846    1619 out.go:177] * Verifying Kubernetes components...
	I0818 11:38:46.855858    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.855906    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.856490    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49574
	I0818 11:38:46.858186    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.858237    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:46.859997    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49576
	I0818 11:38:46.892460    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:46.860144    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49577
	I0818 11:38:46.860150    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49579
	I0818 11:38:46.860188    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49578
	I0818 11:38:46.863938    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49580
	I0818 11:38:46.863979    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49581
	I0818 11:38:46.865492    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49582
	I0818 11:38:46.867606    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49583
	I0818 11:38:46.867846    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49584
	I0818 11:38:46.870236    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49585
	I0818 11:38:46.871181    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49586
	I0818 11:38:46.871760    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49587
	I0818 11:38:46.893083    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:46.893101    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:46.893108    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.893153    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:46.893162    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.893217    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.893238    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:46.893168    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.893174    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:46.893391    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.893457    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.893543    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.893611    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.893616    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.893618    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:46.893630    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:46.893654    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.893666    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.893691    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.893702    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:46.893715    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:46.893729    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:46.893750    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:46.893764    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:46.893865    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.894215    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:46.894225    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:46.894233    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:46.894272    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:46.894288    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:46.894295    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:46.894370    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:46.894387    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:46.894398    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:46.894405    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:46.894418    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:46.894424    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:46.894399    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:46.894437    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:46.894455    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:46.894457    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:46.894466    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.894473    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:46.894488    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.894513    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:46.894525    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:46.894570    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:46.894584    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:46.894589    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:46.894603    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:46.894623    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:46.896404    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:46.896560    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:46.896634    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:46.896608    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:46.896448    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:46.897009    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:46.897040    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:46.897044    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.897254    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.897527    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.897334    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.898254    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:46.898323    1619 main.go:141] libmachine: (addons-103000) Calling .GetState
	I0818 11:38:46.896687    1619 main.go:141] libmachine: (addons-103000) Calling .GetState
	I0818 11:38:46.898280    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.898756    1619 main.go:141] libmachine: (addons-103000) Calling .GetState
	I0818 11:38:46.898781    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.897217    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:46.899732    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.897244    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:46.900165    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.900259    1619 main.go:141] libmachine: (addons-103000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 11:38:46.900334    1619 main.go:141] libmachine: (addons-103000) DBG | hyperkit pid from json: 1632
	I0818 11:38:46.900565    1619 main.go:141] libmachine: (addons-103000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 11:38:46.900622    1619 main.go:141] libmachine: (addons-103000) DBG | hyperkit pid from json: 1632
	I0818 11:38:46.900667    1619 main.go:141] libmachine: (addons-103000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 11:38:46.900690    1619 main.go:141] libmachine: (addons-103000) DBG | hyperkit pid from json: 1632
	I0818 11:38:46.900952    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.900973    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.901064    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.901054    1619 host.go:66] Checking if "addons-103000" exists ...
	I0818 11:38:46.901094    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.901196    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.901213    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.901298    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.901350    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.901383    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.901410    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.930474    1619 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 11:38:46.930798    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49602
	I0818 11:38:46.931633    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49601
	I0818 11:38:46.931672    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.931935    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49603
	I0818 11:38:46.932048    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.932117    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.932159    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.932309    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.932426    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.932492    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.932745    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.938555    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.938701    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.938888    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.944530    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:46.944662    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:46.944613    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:46.947023    1619 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
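Editor's note: the bash -c pipeline above pulls the live coredns ConfigMap, uses sed to splice a hosts block in front of the `forward . /etc/resolv.conf` stanza and a `log` directive in front of `errors`, then pipes the result back through kubectl replace. A sketch for confirming the edit took, assuming an admin kubeconfig is active:

	# Dump the rewritten Corefile and look for the injected host record.
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# Per the sed expressions, expect a block like this ahead of the forward line:
	#    hosts {
	#       192.169.0.1 host.minikube.internal
	#       fallthrough
	#    }
	#    forward . /etc/resolv.conf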
	I0818 11:38:46.947250    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:46.947325    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:46.947338    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:46.947711    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49607
	I0818 11:38:46.951522    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:46.951588    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49609
	I0818 11:38:46.951568    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:46.951621    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:46.951628    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49608
	I0818 11:38:46.953409    1619 main.go:141] libmachine: (addons-103000) Calling .GetState
	I0818 11:38:46.953544    1619 main.go:141] libmachine: (addons-103000) Calling .GetState
	I0818 11:38:46.953661    1619 main.go:141] libmachine: (addons-103000) Calling .GetState
	I0818 11:38:46.953797    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.953828    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49611
	I0818 11:38:46.958423    1619 main.go:141] libmachine: (addons-103000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 11:38:46.958473    1619 main.go:141] libmachine: (addons-103000) DBG | hyperkit pid from json: 1632
	I0818 11:38:46.958486    1619 main.go:141] libmachine: (addons-103000) Calling .DriverName
	I0818 11:38:46.958495    1619 main.go:141] libmachine: (addons-103000) Calling .DriverName
	I0818 11:38:46.958520    1619 main.go:141] libmachine: (addons-103000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 11:38:46.958545    1619 main.go:141] libmachine: (addons-103000) DBG | hyperkit pid from json: 1632
	I0818 11:38:46.958503    1619 addons.go:234] Setting addon default-storageclass=true in "addons-103000"
	I0818 11:38:46.958582    1619 main.go:141] libmachine: (addons-103000) Calling .DriverName
	I0818 11:38:46.958616    1619 main.go:141] libmachine: (addons-103000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 11:38:46.958583    1619 host.go:66] Checking if "addons-103000" exists ...
	I0818 11:38:46.958670    1619 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-103000"
	I0818 11:38:46.960167    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49614
	I0818 11:38:46.960443    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.960478    1619 host.go:66] Checking if "addons-103000" exists ...
	I0818 11:38:46.960720    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:46.960706    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.960696    1619 main.go:141] libmachine: (addons-103000) DBG | hyperkit pid from json: 1632
	I0818 11:38:46.960896    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:46.961139    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.965743    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49617
	I0818 11:38:46.965840    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.965875    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:46.965875    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:46.965878    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.965900    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:46.965907    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:46.965911    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49618
	I0818 11:38:46.965914    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.965974    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:46.966001    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:46.966044    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:46.966055    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:46.966081    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:46.967647    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:46.967728    1619 main.go:141] libmachine: (addons-103000) Calling .GetState
	I0818 11:38:46.967673    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.967931    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:46.967917    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.967903    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:46.971451    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49622
	I0818 11:38:46.998575    1619 main.go:141] libmachine: (addons-103000) Calling .GetState
	I0818 11:38:46.971437    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:46.998607    1619 main.go:141] libmachine: (addons-103000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 11:38:46.998622    1619 main.go:141] libmachine: (addons-103000) Calling .GetState
	I0818 11:38:46.971468    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49621
	I0818 11:38:46.998650    1619 main.go:141] libmachine: (addons-103000) DBG | hyperkit pid from json: 1632
	I0818 11:38:46.998698    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:46.998752    1619 main.go:141] libmachine: (addons-103000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 11:38:46.974351    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49623
	I0818 11:38:46.998783    1619 main.go:141] libmachine: (addons-103000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 11:38:46.998795    1619 main.go:141] libmachine: (addons-103000) Calling .GetState
	I0818 11:38:46.974771    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49624
	I0818 11:38:46.977549    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49625
	I0818 11:38:46.978751    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49626
	I0818 11:38:46.999007    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:46.999021    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:46.998118    1619 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0818 11:38:46.999040    1619 main.go:141] libmachine: (addons-103000) DBG | hyperkit pid from json: 1632
	I0818 11:38:46.999058    1619 main.go:141] libmachine: (addons-103000) DBG | hyperkit pid from json: 1632
	I0818 11:38:46.999108    1619 main.go:141] libmachine: (addons-103000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 11:38:46.999244    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:47.034536    1619 main.go:141] libmachine: (addons-103000) DBG | hyperkit pid from json: 1632
	I0818 11:38:47.034546    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:46.999401    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.999408    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.999454    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.999455    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:46.999484    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.999491    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:46.999617    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:46.999676    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:47.000091    1619 main.go:141] libmachine: (addons-103000) Calling .DriverName
	I0818 11:38:47.000164    1619 main.go:141] libmachine: (addons-103000) Calling .DriverName
	I0818 11:38:47.034922    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:47.000220    1619 main.go:141] libmachine: (addons-103000) Calling .DriverName
	I0818 11:38:47.034984    1619 main.go:141] libmachine: (addons-103000) Calling .GetState
	I0818 11:38:47.000443    1619 main.go:141] libmachine: (addons-103000) Calling .DriverName
	I0818 11:38:47.034246    1619 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0818 11:38:47.034960    1619 main.go:141] libmachine: (addons-103000) Calling .GetState
	I0818 11:38:47.092238    1619 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0818 11:38:47.035145    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:47.092256    1619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0818 11:38:47.092276    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:47.092279    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHHostname
	I0818 11:38:47.035177    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:47.035208    1619 main.go:141] libmachine: (addons-103000) Calling .DriverName
	I0818 11:38:47.035221    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:47.035257    1619 main.go:141] libmachine: (addons-103000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 11:38:47.114471    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:47.035263    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:47.035345    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:47.035345    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:47.036368    1619 main.go:141] libmachine: (addons-103000) Calling .DriverName
	I0818 11:38:47.071304    1619 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0818 11:38:47.071707    1619 main.go:141] libmachine: (addons-103000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 11:38:47.072750    1619 main.go:141] libmachine: (addons-103000) Calling .DriverName
	I0818 11:38:47.092340    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:47.092461    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHPort
	I0818 11:38:47.092578    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:47.101583    1619 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 11:38:47.114242    1619 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0818 11:38:47.114489    1619 main.go:141] libmachine: (addons-103000) DBG | hyperkit pid from json: 1632
	I0818 11:38:47.114553    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:47.114742    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:47.135306    1619 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0818 11:38:47.135603    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:47.135632    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:47.156116    1619 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0818 11:38:47.193230    1619 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0818 11:38:47.193617    1619 main.go:141] libmachine: (addons-103000) DBG | hyperkit pid from json: 1632
	I0818 11:38:47.193707    1619 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0818 11:38:47.194232    1619 main.go:141] libmachine: (addons-103000) Calling .GetState
	I0818 11:38:47.194321    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:47.194322    1619 main.go:141] libmachine: (addons-103000) Calling .GetState
	I0818 11:38:47.194360    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:47.194361    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:47.194401    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:47.194586    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:47.230358    1619 out.go:177]   - Using image docker.io/registry:2.8.3
	I0818 11:38:47.230562    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHHostname
	I0818 11:38:47.230734    1619 main.go:141] libmachine: (addons-103000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 11:38:47.231862    1619 main.go:141] libmachine: (addons-103000) Calling .DriverName
	I0818 11:38:47.267041    1619 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0818 11:38:47.267379    1619 main.go:141] libmachine: (addons-103000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 11:38:47.267414    1619 main.go:141] libmachine: (addons-103000) Calling .GetState
	I0818 11:38:47.267419    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHUsername
	I0818 11:38:47.267433    1619 main.go:141] libmachine: (addons-103000) Calling .GetState
	I0818 11:38:47.267664    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:47.267668    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:47.268446    1619 main.go:141] libmachine: (addons-103000) Calling .DriverName
	I0818 11:38:47.287447    1619 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0818 11:38:47.287858    1619 main.go:141] libmachine: (addons-103000) DBG | hyperkit pid from json: 1632
	I0818 11:38:47.287898    1619 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0818 11:38:47.288067    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHPort
	I0818 11:38:47.324414    1619 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0818 11:38:47.324415    1619 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0818 11:38:47.324698    1619 main.go:141] libmachine: (addons-103000) DBG | hyperkit pid from json: 1632
	I0818 11:38:47.324747    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHHostname
	I0818 11:38:47.324754    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:47.324779    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:47.324982    1619 main.go:141] libmachine: (addons-103000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 11:38:47.324982    1619 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/id_rsa Username:docker}
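Editor's note: each sshutil.go line records a fresh SSH client built from the machine's stored key. With the parameters logged here (IP, key path, and username are taken verbatim from the line above; port 22 is the default), the same session can be opened by hand:

	# Reproduce the logged SSH session into the addons-103000 VM.
	ssh -i /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/id_rsa \
	    docker@192.169.0.2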
	I0818 11:38:47.325002    1619 main.go:141] libmachine: (addons-103000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 11:38:47.325094    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:47.326725    1619 main.go:141] libmachine: (addons-103000) Calling .DriverName
	I0818 11:38:47.326727    1619 main.go:141] libmachine: (addons-103000) Calling .DriverName
	I0818 11:38:47.345357    1619 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0818 11:38:47.345761    1619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0818 11:38:47.345804    1619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0818 11:38:47.382872    1619 main.go:141] libmachine: (addons-103000) DBG | hyperkit pid from json: 1632
	I0818 11:38:47.382947    1619 main.go:141] libmachine: (addons-103000) DBG | hyperkit pid from json: 1632
	I0818 11:38:47.383230    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHPort
	I0818 11:38:47.393234    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49635
	I0818 11:38:47.393726    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49636
	I0818 11:38:47.419440    1619 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0818 11:38:47.419838    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHUsername
	I0818 11:38:47.434950    1619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0818 11:38:47.456150    1619 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 11:38:47.456404    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHHostname
	I0818 11:38:47.456431    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHHostname
	I0818 11:38:47.456777    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:47.456974    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:47.456979    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:47.465900    1619 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0818 11:38:47.466578    1619 node_ready.go:35] waiting up to 6m0s for node "addons-103000" to be "Ready" ...
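Editor's note: node_ready.go now blocks for up to six minutes on the node's Ready condition. A stock-kubectl equivalent of that wait — a sketch, not minikube's internal poller — would be:

	# Wait for the Ready condition on the new control-plane node.
	kubectl wait --for=condition=Ready node/addons-103000 --timeout=6m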
	I0818 11:38:47.477170    1619 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 11:38:47.477465    1619 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 11:38:47.514651    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHHostname
	I0818 11:38:47.477605    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHPort
	I0818 11:38:47.477647    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHPort
	I0818 11:38:47.477663    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHUsername
	I0818 11:38:47.477625    1619 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/id_rsa Username:docker}
	I0818 11:38:47.477982    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:47.478043    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:47.514463    1619 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0818 11:38:47.514827    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:47.514779    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:47.514463    1619 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0818 11:38:47.514864    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:47.514866    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHPort
	I0818 11:38:47.514865    1619 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/id_rsa Username:docker}
	I0818 11:38:47.514907    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:47.551026    1619 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0818 11:38:47.551041    1619 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0818 11:38:47.551599    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:47.551603    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:47.588342    1619 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0818 11:38:47.588378    1619 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 11:38:47.588726    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHUsername
	I0818 11:38:47.588748    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHUsername
	I0818 11:38:47.588755    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:47.589677    1619 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0818 11:38:47.625397    1619 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0818 11:38:47.625574    1619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0818 11:38:47.625587    1619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 11:38:47.625633    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHHostname
	I0818 11:38:47.625650    1619 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0818 11:38:47.625607    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHHostname
	I0818 11:38:47.625548    1619 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0818 11:38:47.625721    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHHostname
	I0818 11:38:47.625785    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHUsername
	I0818 11:38:47.625789    1619 main.go:141] libmachine: (addons-103000) Calling .GetState
	I0818 11:38:47.625827    1619 main.go:141] libmachine: (addons-103000) Calling .GetState
	I0818 11:38:47.625827    1619 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/id_rsa Username:docker}
	I0818 11:38:47.625849    1619 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/id_rsa Username:docker}
	I0818 11:38:47.625882    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHPort
	I0818 11:38:47.625891    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHPort
	I0818 11:38:47.625949    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHPort
	I0818 11:38:47.625982    1619 main.go:141] libmachine: (addons-103000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 11:38:47.626008    1619 main.go:141] libmachine: (addons-103000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 11:38:47.626025    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:47.626041    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:47.626052    1619 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/id_rsa Username:docker}
	I0818 11:38:47.626169    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:47.626195    1619 main.go:141] libmachine: (addons-103000) DBG | hyperkit pid from json: 1632
	I0818 11:38:47.626214    1619 main.go:141] libmachine: (addons-103000) DBG | hyperkit pid from json: 1632
	I0818 11:38:47.626235    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHUsername
	I0818 11:38:47.626261    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHUsername
	I0818 11:38:47.626439    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHUsername
	I0818 11:38:47.626453    1619 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/id_rsa Username:docker}
	I0818 11:38:47.626437    1619 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/id_rsa Username:docker}
	I0818 11:38:47.626618    1619 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/id_rsa Username:docker}
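
The burst of `scp memory --> /etc/kubernetes/addons/... (N bytes)` and `new ssh client` lines above is minikube streaming embedded addon manifests into the guest over SSH, building a fresh client from the driver's GetSSHHostname/GetSSHPort/GetSSHKeyPath/GetSSHUsername calls for each transfer. A minimal sketch of that pattern, assuming golang.org/x/crypto/ssh and piping the bytes through `sudo tee` (an illustrative helper, not minikube's actual ssh_runner implementation):

    package main

    import (
        "bytes"
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // copyMemory pushes an in-memory manifest to a path on the guest,
    // mirroring the "scp memory --> <path> (<n> bytes)" log lines.
    func copyMemory(addr, user, keyPath, dst string, data []byte) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
        })
        if err != nil {
            return err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
    }
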
	I0818 11:38:47.627331    1619 main.go:141] libmachine: (addons-103000) Calling .DriverName
	I0818 11:38:47.627399    1619 main.go:141] libmachine: (addons-103000) Calling .DriverName
	I0818 11:38:47.627471    1619 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 11:38:47.627479    1619 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 11:38:47.627488    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHHostname
	I0818 11:38:47.627619    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHPort
	I0818 11:38:47.646703    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:47.682996    1619 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0818 11:38:47.683198    1619 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0818 11:38:47.683490    1619 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0818 11:38:47.683279    1619 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0818 11:38:47.704613    1619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0818 11:38:47.704628    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHHostname
	I0818 11:38:47.683625    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHUsername
	I0818 11:38:47.698667    1619 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0818 11:38:47.704691    1619 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0818 11:38:47.704384    1619 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0818 11:38:47.704793    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHPort
	I0818 11:38:47.704815    1619 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/id_rsa Username:docker}
	I0818 11:38:47.715818    1619 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0818 11:38:47.741498    1619 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0818 11:38:47.724305    1619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0818 11:38:47.737331    1619 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0818 11:38:47.741139    1619 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0818 11:38:47.741555    1619 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0818 11:38:47.741651    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:47.757824    1619 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0818 11:38:47.778682    1619 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0818 11:38:47.773850    1619 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0818 11:38:47.778706    1619 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0818 11:38:47.778313    1619 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0818 11:38:47.778856    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHUsername
	I0818 11:38:47.795407    1619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 11:38:47.797118    1619 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 11:38:47.815779    1619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0818 11:38:47.802361    1619 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0818 11:38:47.815808    1619 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0818 11:38:47.813170    1619 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0818 11:38:47.815827    1619 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0818 11:38:47.815862    1619 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/id_rsa Username:docker}
	I0818 11:38:47.818662    1619 node_ready.go:49] node "addons-103000" has status "Ready":"True"
	I0818 11:38:47.836463    1619 node_ready.go:38] duration metric: took 358.953924ms for node "addons-103000" to be "Ready" ...
	I0818 11:38:47.873779    1619 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 11:38:47.848488    1619 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0818 11:38:47.857234    1619 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0818 11:38:47.873825    1619 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0818 11:38:47.873843    1619 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0818 11:38:47.873418    1619 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0818 11:38:47.881879    1619 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0818 11:38:47.910728    1619 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0818 11:38:47.883412    1619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0818 11:38:47.902441    1619 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0818 11:38:47.910805    1619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0818 11:38:47.902628    1619 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0818 11:38:47.910831    1619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0818 11:38:47.910472    1619 out.go:177]   - Using image docker.io/busybox:stable
	I0818 11:38:47.968585    1619 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0818 11:38:47.913347    1619 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0818 11:38:47.968602    1619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0818 11:38:47.968611    1619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0818 11:38:47.968627    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHHostname
	I0818 11:38:47.924597    1619 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 11:38:47.968665    1619 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 11:38:47.931142    1619 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0818 11:38:47.951790    1619 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0818 11:38:47.968796    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHPort
	I0818 11:38:47.980644    1619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 11:38:47.989281    1619 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0818 11:38:47.989534    1619 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0818 11:38:47.989704    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:47.989830    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHUsername
	I0818 11:38:47.989950    1619 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/id_rsa Username:docker}
	I0818 11:38:47.999524    1619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0818 11:38:48.002882    1619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0818 11:38:48.026372    1619 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0818 11:38:48.026388    1619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0818 11:38:48.026405    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHHostname
	I0818 11:38:48.026571    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHPort
	I0818 11:38:48.026713    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:48.026826    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHUsername
	I0818 11:38:48.026927    1619 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/id_rsa Username:docker}
	I0818 11:38:48.043596    1619 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 11:38:48.043613    1619 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 11:38:48.064018    1619 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0818 11:38:48.064033    1619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0818 11:38:48.064046    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHHostname
	I0818 11:38:48.064227    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHPort
	I0818 11:38:48.064332    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:48.064440    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHUsername
	I0818 11:38:48.064551    1619 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/id_rsa Username:docker}
	W0818 11:38:48.070089    1619 kapi.go:211] failed rescaling "coredns" deployment in "kube-system" namespace and "addons-103000" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0818 11:38:48.070102    1619 start.go:160] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
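
The W/E pair above is Kubernetes optimistic concurrency at work: the coredns Deployment changed between minikube's read and its write, so the apiserver rejected the update with "the object has been modified". The conventional client-go remedy is to re-read and retry under a conflict-aware backoff; a sketch, assuming a configured clientset (a hypothetical helper for illustration; minikube itself treats this particular failure as non-retryable):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // scaleCoreDNS re-reads the Deployment on every attempt before
    // writing it back, so a concurrent modification only costs a retry.
    func scaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset, replicas int32) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            dep, err := cs.AppsV1().Deployments("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
            if err != nil {
                return err
            }
            dep.Spec.Replicas = &replicas
            _, err = cs.AppsV1().Deployments("kube-system").Update(ctx, dep, metav1.UpdateOptions{})
            return err
        })
    }
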
	I0818 11:38:48.084351    1619 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0818 11:38:48.085905    1619 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-103000" in "kube-system" namespace to be "Ready" ...
	I0818 11:38:48.103565    1619 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0818 11:38:48.103584    1619 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0818 11:38:48.107882    1619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0818 11:38:48.138393    1619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0818 11:38:48.142172    1619 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0818 11:38:48.202179    1619 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0818 11:38:48.207818    1619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 11:38:48.255797    1619 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0818 11:38:48.255811    1619 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0818 11:38:48.276177    1619 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0818 11:38:48.313073    1619 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0818 11:38:48.313092    1619 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0818 11:38:48.313107    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHHostname
	I0818 11:38:48.313272    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHPort
	I0818 11:38:48.313394    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:48.313490    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHUsername
	I0818 11:38:48.313582    1619 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/id_rsa Username:docker}
	I0818 11:38:48.464562    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:48.464575    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:48.464732    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:48.464742    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:48.464750    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:48.464756    1619 main.go:141] libmachine: (addons-103000) DBG | Closing plugin on server side
	I0818 11:38:48.464760    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:48.464931    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:48.464940    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:48.464946    1619 main.go:141] libmachine: (addons-103000) DBG | Closing plugin on server side
	I0818 11:38:48.517185    1619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0818 11:38:48.592327    1619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0818 11:38:48.600995    1619 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0818 11:38:48.601006    1619 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0818 11:38:48.657249    1619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0818 11:38:48.953512    1619 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0818 11:38:48.953524    1619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0818 11:38:48.993376    1619 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0818 11:38:48.993388    1619 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0818 11:38:49.227090    1619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0818 11:38:49.477966    1619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.73644885s)
	I0818 11:38:49.477995    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:49.478002    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:49.478158    1619 main.go:141] libmachine: (addons-103000) DBG | Closing plugin on server side
	I0818 11:38:49.478162    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:49.478170    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:49.478185    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:49.478191    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:49.478328    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:49.478329    1619 main.go:141] libmachine: (addons-103000) DBG | Closing plugin on server side
	I0818 11:38:49.478350    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:49.581862    1619 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0818 11:38:49.581876    1619 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0818 11:38:49.922488    1619 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0818 11:38:49.922500    1619 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0818 11:38:50.091951    1619 pod_ready.go:103] pod "etcd-addons-103000" in "kube-system" namespace has status "Ready":"False"
	I0818 11:38:50.263670    1619 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0818 11:38:50.263684    1619 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0818 11:38:50.752250    1619 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0818 11:38:50.752265    1619 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0818 11:38:50.849164    1619 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0818 11:38:50.849177    1619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0818 11:38:51.090773    1619 pod_ready.go:93] pod "etcd-addons-103000" in "kube-system" namespace has status "Ready":"True"
	I0818 11:38:51.090790    1619 pod_ready.go:82] duration metric: took 3.004888895s for pod "etcd-addons-103000" in "kube-system" namespace to be "Ready" ...
	I0818 11:38:51.090797    1619 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-103000" in "kube-system" namespace to be "Ready" ...
	I0818 11:38:51.165062    1619 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0818 11:38:51.165076    1619 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0818 11:38:51.242867    1619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.427122622s)
	I0818 11:38:51.242906    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:51.242925    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:51.243090    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:51.243092    1619 main.go:141] libmachine: (addons-103000) DBG | Closing plugin on server side
	I0818 11:38:51.243102    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:51.243120    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:51.243128    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:51.243275    1619 main.go:141] libmachine: (addons-103000) DBG | Closing plugin on server side
	I0818 11:38:51.243280    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:51.243289    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:51.424408    1619 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0818 11:38:51.424421    1619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0818 11:38:51.516689    1619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.527176852s)
	I0818 11:38:51.516723    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:51.516730    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:51.516854    1619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (3.606109266s)
	I0818 11:38:51.516874    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:51.516882    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:51.516884    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:51.516903    1619 main.go:141] libmachine: (addons-103000) DBG | Closing plugin on server side
	I0818 11:38:51.516908    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:51.516930    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:51.516939    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:51.517062    1619 main.go:141] libmachine: (addons-103000) DBG | Closing plugin on server side
	I0818 11:38:51.517079    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:51.517086    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:51.517106    1619 main.go:141] libmachine: (addons-103000) DBG | Closing plugin on server side
	I0818 11:38:51.517123    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:51.517134    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:51.517138    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:51.517147    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:51.517336    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:51.517348    1619 main.go:141] libmachine: (addons-103000) DBG | Closing plugin on server side
	I0818 11:38:51.517348    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:51.572651    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:51.572664    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:51.572819    1619 main.go:141] libmachine: (addons-103000) DBG | Closing plugin on server side
	I0818 11:38:51.572827    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:51.572835    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:51.977013    1619 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0818 11:38:51.977025    1619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0818 11:38:52.012615    1619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.009737051s)
	I0818 11:38:52.012646    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:52.012654    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:52.012767    1619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.013253844s)
	I0818 11:38:52.012787    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:52.012802    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:52.012827    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:52.012832    1619 main.go:141] libmachine: (addons-103000) DBG | Closing plugin on server side
	I0818 11:38:52.012838    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:52.012854    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:52.012862    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:52.012995    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:52.013006    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:52.013017    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:52.013023    1619 main.go:141] libmachine: (addons-103000) DBG | Closing plugin on server side
	I0818 11:38:52.013024    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:52.013098    1619 main.go:141] libmachine: (addons-103000) DBG | Closing plugin on server side
	I0818 11:38:52.013109    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:52.013119    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:52.013130    1619 addons.go:475] Verifying addon registry=true in "addons-103000"
	I0818 11:38:52.013181    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:52.013192    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:52.037765    1619 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-103000 service yakd-dashboard -n yakd-dashboard
	
	I0818 11:38:52.037765    1619 out.go:177] * Verifying registry addon...
	I0818 11:38:52.059496    1619 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0818 11:38:52.068985    1619 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0818 11:38:52.068998    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
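
The kapi.go:75/86/96 lines above show the shape of addon verification: list pods by label selector, then poll each one until its Ready condition is True (the registry pods sit in Pending for several seconds in the lines that follow). A sketch of the same loop with client-go, assuming an apimachinery version that provides wait.PollUntilContextTimeout:

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForLabel polls until every pod matching the selector reports
    // the Ready condition, mirroring the "waiting for pod" loop above.
    func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // transient error or empty list: keep polling
                }
                for _, p := range pods.Items {
                    ready := false
                    for _, c := range p.Status.Conditions {
                        if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                            ready = true
                        }
                    }
                    if !ready {
                        return false, nil
                    }
                }
                return true, nil
            })
    }
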
	I0818 11:38:52.090627    1619 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0818 11:38:52.090641    1619 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0818 11:38:52.281229    1619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0818 11:38:52.579724    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:38:53.063188    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:38:53.097676    1619 pod_ready.go:103] pod "kube-apiserver-addons-103000" in "kube-system" namespace has status "Ready":"False"
	I0818 11:38:53.654080    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:38:54.019071    1619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.911206483s)
	I0818 11:38:54.019091    1619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.880715797s)
	W0818 11:38:54.019102    1619 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0818 11:38:54.019113    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:54.019121    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:54.019121    1619 retry.go:31] will retry after 361.221848ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
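
The failure above is a CRD registration race, not a broken manifest: the apply batch creates the VolumeSnapshot CRDs and a VolumeSnapshotClass in one shot, and the apiserver cannot map the new kind until discovery refreshes, hence `no matches for kind "VolumeSnapshotClass"` and the hint `ensure CRDs are installed first`. The log's remedy is a timed retry (`will retry after 361.221848ms`), after which the `--force` re-apply at 11:38:54 succeeds. A sketch of that retry shape, shelling out to kubectl with exponential backoff (a hypothetical helper, not minikube's retry.go):

    package main

    import (
        "os/exec"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // applyWithBackoff re-runs `kubectl apply` until the CRD the batch
    // depends on is registered and the apply stops failing.
    func applyWithBackoff(files ...string) error {
        args := []string{"apply"}
        for _, f := range files {
            args = append(args, "-f", f)
        }
        backoff := wait.Backoff{Duration: 350 * time.Millisecond, Factor: 2.0, Steps: 5}
        return wait.ExponentialBackoff(backoff, func() (bool, error) {
            // Swallow the error and retry: "no matches for kind" clears
            // once discovery has picked up the freshly created CRD.
            return exec.Command("kubectl", args...).Run() == nil, nil
        })
    }
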
	I0818 11:38:54.019165    1619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.811370263s)
	I0818 11:38:54.019184    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:54.019193    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:54.019282    1619 main.go:141] libmachine: (addons-103000) DBG | Closing plugin on server side
	I0818 11:38:54.019293    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:54.019315    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:54.019329    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:54.019337    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:54.019356    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:54.019367    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:54.019374    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:54.019377    1619 main.go:141] libmachine: (addons-103000) DBG | Closing plugin on server side
	I0818 11:38:54.019382    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:54.019519    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:54.019532    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:54.019540    1619 addons.go:475] Verifying addon metrics-server=true in "addons-103000"
	I0818 11:38:54.019539    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:54.019558    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:54.019569    1619 main.go:141] libmachine: (addons-103000) DBG | Closing plugin on server side
	I0818 11:38:54.080066    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:38:54.138745    1619 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0818 11:38:54.138766    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHHostname
	I0818 11:38:54.138966    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHPort
	I0818 11:38:54.139073    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:54.139180    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHUsername
	I0818 11:38:54.139266    1619 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/id_rsa Username:docker}
	I0818 11:38:54.380534    1619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0818 11:38:54.420052    1619 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0818 11:38:54.562552    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:38:54.576257    1619 addons.go:234] Setting addon gcp-auth=true in "addons-103000"
	I0818 11:38:54.576294    1619 host.go:66] Checking if "addons-103000" exists ...
	I0818 11:38:54.576584    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:54.576609    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:54.586521    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49654
	I0818 11:38:54.586896    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:54.587317    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:54.587344    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:54.587592    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:54.587997    1619 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:38:54.588030    1619 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:38:54.597240    1619 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:49656
	I0818 11:38:54.597541    1619 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:38:54.597866    1619 main.go:141] libmachine: Using API Version  1
	I0818 11:38:54.597878    1619 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:38:54.598101    1619 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:38:54.598213    1619 main.go:141] libmachine: (addons-103000) Calling .GetState
	I0818 11:38:54.598300    1619 main.go:141] libmachine: (addons-103000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 11:38:54.598386    1619 main.go:141] libmachine: (addons-103000) DBG | hyperkit pid from json: 1632
	I0818 11:38:54.599341    1619 main.go:141] libmachine: (addons-103000) Calling .DriverName
	I0818 11:38:54.599505    1619 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0818 11:38:54.599516    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHHostname
	I0818 11:38:54.599586    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHPort
	I0818 11:38:54.599675    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHKeyPath
	I0818 11:38:54.599759    1619 main.go:141] libmachine: (addons-103000) Calling .GetSSHUsername
	I0818 11:38:54.599833    1619 sshutil.go:53] new ssh client: &{IP:192.169.0.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/addons-103000/id_rsa Username:docker}
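
The `Launching plugin server for driver hyperkit` / `Plugin server listening at address 127.0.0.1:49654` exchange above is libmachine's out-of-process driver model: every `.GetState`, `.GetSSHHostname`, or `.Close` call in this log is an RPC to the docker-machine-driver-hyperkit binary on an ephemeral loopback port. A toy server showing the shape with net/rpc (illustrative only; libmachine's actual wire protocol and method set differ):

    package main

    import (
        "fmt"
        "log"
        "net"
        "net/rpc"
    )

    // Driver stands in for the hyperkit driver binary's RPC surface.
    type Driver struct{}

    func (d *Driver) GetState(_ struct{}, reply *string) error {
        *reply = "Running"
        return nil
    }

    func main() {
        srv := rpc.NewServer()
        if err := srv.Register(new(Driver)); err != nil {
            log.Fatal(err)
        }
        ln, err := net.Listen("tcp", "127.0.0.1:0") // ephemeral port, as in the log
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("Plugin server listening at address", ln.Addr())
        srv.Accept(ln) // serve driver calls until the parent shuts us down
    }
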
	I0818 11:38:55.068256    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:38:55.108141    1619 pod_ready.go:103] pod "kube-apiserver-addons-103000" in "kube-system" namespace has status "Ready":"False"
	I0818 11:38:55.294699    1619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.77754108s)
	I0818 11:38:55.294732    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:55.294745    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:55.294936    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:55.294944    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:55.294953    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:55.294957    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:55.295096    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:55.295107    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:55.295119    1619 addons.go:475] Verifying addon ingress=true in "addons-103000"
	I0818 11:38:55.319753    1619 out.go:177] * Verifying ingress addon...
	I0818 11:38:55.341258    1619 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0818 11:38:55.354326    1619 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0818 11:38:55.354338    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:38:55.562548    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:38:55.848234    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:38:56.084192    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:38:56.115097    1619 pod_ready.go:93] pod "kube-apiserver-addons-103000" in "kube-system" namespace has status "Ready":"True"
	I0818 11:38:56.115111    1619 pod_ready.go:82] duration metric: took 5.024341313s for pod "kube-apiserver-addons-103000" in "kube-system" namespace to be "Ready" ...
	I0818 11:38:56.115120    1619 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-103000" in "kube-system" namespace to be "Ready" ...
	I0818 11:38:56.129536    1619 pod_ready.go:93] pod "kube-controller-manager-addons-103000" in "kube-system" namespace has status "Ready":"True"
	I0818 11:38:56.129550    1619 pod_ready.go:82] duration metric: took 14.424986ms for pod "kube-controller-manager-addons-103000" in "kube-system" namespace to be "Ready" ...
	I0818 11:38:56.129557    1619 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rfzrs" in "kube-system" namespace to be "Ready" ...
	I0818 11:38:56.142678    1619 pod_ready.go:93] pod "kube-proxy-rfzrs" in "kube-system" namespace has status "Ready":"True"
	I0818 11:38:56.142692    1619 pod_ready.go:82] duration metric: took 13.129255ms for pod "kube-proxy-rfzrs" in "kube-system" namespace to be "Ready" ...
	I0818 11:38:56.142699    1619 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-103000" in "kube-system" namespace to be "Ready" ...
	I0818 11:38:56.158215    1619 pod_ready.go:93] pod "kube-scheduler-addons-103000" in "kube-system" namespace has status "Ready":"True"
	I0818 11:38:56.158229    1619 pod_ready.go:82] duration metric: took 15.525758ms for pod "kube-scheduler-addons-103000" in "kube-system" namespace to be "Ready" ...
	I0818 11:38:56.158236    1619 pod_ready.go:39] duration metric: took 8.284502277s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 11:38:56.158256    1619 api_server.go:52] waiting for apiserver process to appear ...
	I0818 11:38:56.158311    1619 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 11:38:56.355046    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:38:56.576736    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:38:56.930165    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:38:57.077833    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:38:57.360867    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:38:57.453141    1619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.795923254s)
	I0818 11:38:57.453187    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:57.453192    1619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.22613333s)
	I0818 11:38:57.453201    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:57.453212    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:57.453225    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:57.453286    1619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (8.861001127s)
	I0818 11:38:57.453311    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:57.453320    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:57.453373    1619 main.go:141] libmachine: (addons-103000) DBG | Closing plugin on server side
	I0818 11:38:57.453423    1619 main.go:141] libmachine: (addons-103000) DBG | Closing plugin on server side
	I0818 11:38:57.453431    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:57.453434    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:57.453448    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:57.453452    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:57.453458    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:57.453461    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:57.453464    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:57.453501    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:57.453542    1619 main.go:141] libmachine: (addons-103000) DBG | Closing plugin on server side
	I0818 11:38:57.453586    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:57.453610    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:57.453621    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:57.453646    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:57.453708    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:57.453722    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:57.453732    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:57.453744    1619 main.go:141] libmachine: (addons-103000) DBG | Closing plugin on server side
	I0818 11:38:57.453753    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:57.453861    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:57.453887    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:57.502130    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:57.502143    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:57.502275    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:57.502283    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:57.502283    1619 main.go:141] libmachine: (addons-103000) DBG | Closing plugin on server side
	I0818 11:38:57.653185    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:38:57.856344    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:38:58.079904    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:38:58.110105    1619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.828875108s)
	I0818 11:38:58.110135    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:58.110144    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:58.110147    1619 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.729589388s)
	I0818 11:38:58.110165    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:58.110176    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:58.110178    1619 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.510686906s)
	I0818 11:38:58.110200    1619 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.951891949s)
	I0818 11:38:58.110214    1619 api_server.go:72] duration metric: took 11.276392278s to wait for apiserver process to appear ...
	I0818 11:38:58.110220    1619 api_server.go:88] waiting for apiserver healthz status ...
	I0818 11:38:58.110317    1619 main.go:141] libmachine: (addons-103000) DBG | Closing plugin on server side
	I0818 11:38:58.110326    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:58.110236    1619 api_server.go:253] Checking apiserver healthz at https://192.169.0.2:8443/healthz ...
	I0818 11:38:58.110335    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:58.110342    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:58.110352    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:58.110386    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:58.110395    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:58.110401    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:58.110406    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:58.110412    1619 main.go:141] libmachine: (addons-103000) DBG | Closing plugin on server side
	I0818 11:38:58.110545    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:58.110550    1619 main.go:141] libmachine: (addons-103000) DBG | Closing plugin on server side
	I0818 11:38:58.110554    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:58.110567    1619 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-103000"
	I0818 11:38:58.110580    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:58.110587    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:58.115067    1619 api_server.go:279] https://192.169.0.2:8443/healthz returned 200:
	ok
	I0818 11:38:58.116763    1619 api_server.go:141] control plane version: v1.31.0
	I0818 11:38:58.116775    1619 api_server.go:131] duration metric: took 6.549563ms to wait for apiserver health ...
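
The api_server.go:253/279 lines above encode the readiness contract: GET https://<node>:8443/healthz and require an HTTP 200 whose body is literally `ok`. A minimal probe in that spirit (the real client authenticates with the cluster CA and client certificates; InsecureSkipVerify here is only to keep the sketch self-contained):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz fails unless /healthz answers 200 with body "ok".
    func checkHealthz(endpoint string) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(endpoint + "/healthz")
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK || string(body) != "ok" {
            return fmt.Errorf("apiserver unhealthy: %d %q", resp.StatusCode, body)
        }
        return nil
    }
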
	I0818 11:38:58.116783    1619 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 11:38:58.148727    1619 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0818 11:38:58.157124    1619 system_pods.go:59] 19 kube-system pods found
	I0818 11:38:58.170351    1619 system_pods.go:61] "coredns-6f6b679f8f-2j78q" [7faddd85-0ae9-46c3-9e07-7a9e020fb5ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 11:38:58.170375    1619 system_pods.go:61] "coredns-6f6b679f8f-fvl94" [1381c864-3490-4f10-a81e-489a86be59a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 11:38:58.170383    1619 system_pods.go:61] "csi-hostpath-attacher-0" [6dba9420-943d-441c-a306-4c77c84bd138] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0818 11:38:58.170389    1619 system_pods.go:61] "csi-hostpath-resizer-0" [dec80c06-9e48-437d-b25f-b02a98587f9a] Pending
	I0818 11:38:58.170398    1619 system_pods.go:61] "csi-hostpathplugin-sc569" [9babc244-8452-4381-af50-0abb2c34e184] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0818 11:38:58.170403    1619 system_pods.go:61] "etcd-addons-103000" [999cb82d-3f37-43c5-ab40-c8d6176d4572] Running
	I0818 11:38:58.170407    1619 system_pods.go:61] "kube-apiserver-addons-103000" [60442f8f-a99d-4834-9329-5b4bb14ce6db] Running
	I0818 11:38:58.170410    1619 system_pods.go:61] "kube-controller-manager-addons-103000" [607652bb-9b82-4629-a6ae-828c7157c24d] Running
	I0818 11:38:58.170419    1619 system_pods.go:61] "kube-ingress-dns-minikube" [5b5401f3-38e9-4517-873f-51e8d436195f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0818 11:38:58.170423    1619 system_pods.go:61] "kube-proxy-rfzrs" [6f10c35f-83fb-4572-aaea-7f119ef103d6] Running
	I0818 11:38:58.170426    1619 system_pods.go:61] "kube-scheduler-addons-103000" [b47d4763-0739-473e-a570-c159c509cc27] Running
	I0818 11:38:58.170430    1619 system_pods.go:61] "metrics-server-8988944d9-w86gd" [c0e788e3-9dc8-424b-bfef-4756401a662e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 11:38:58.170438    1619 system_pods.go:61] "nvidia-device-plugin-daemonset-gkn4q" [b7ad2759-6346-4725-ba80-628779b23d51] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0818 11:38:58.170443    1619 system_pods.go:61] "registry-6fb4cdfc84-mg7mv" [9759a33e-5cfb-46e6-af39-b7afdc6ae4ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0818 11:38:58.170447    1619 system_pods.go:61] "registry-proxy-8jjw4" [b82f8fbe-a074-466b-9e7f-b0a4ce9248ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0818 11:38:58.170452    1619 system_pods.go:61] "snapshot-controller-56fcc65765-vgjn9" [da152b75-eb7c-43c0-991f-b851dd35b584] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0818 11:38:58.170458    1619 system_pods.go:61] "snapshot-controller-56fcc65765-zplw5" [53c5021e-d618-42ed-a127-9143c17d7420] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0818 11:38:58.170462    1619 system_pods.go:61] "storage-provisioner" [d2fef2c2-c9bd-47e1-9945-fdfe12d17610] Running
	I0818 11:38:58.170466    1619 system_pods.go:61] "tiller-deploy-b48cc5f79-swqh8" [207e865c-151f-454b-8fca-434a9300d93a] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0818 11:38:58.170473    1619 system_pods.go:74] duration metric: took 53.684952ms to wait for pod list to return data ...
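
The pod census above (system_pods.go:59/61) amounts to listing everything in kube-system and reporting each pod's phase together with its Ready and ContainersReady conditions. A rough client-go equivalent; the kubeconfig path is a placeholder, not the harness's actual path:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; substitute the kubeconfig the cluster was started with.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		// Phase gives Pending/Running; the two conditions explain which
		// containers are still not ready, as in the log lines above.
		fmt.Printf("%q %s", p.Name, p.Status.Phase)
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" || c.Type == "ContainersReady" {
				fmt.Printf(" / %s:%s", c.Type, c.Status)
			}
		}
		fmt.Println()
	}
}
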
	I0818 11:38:58.170479    1619 default_sa.go:34] waiting for default service account to be created ...
	I0818 11:38:58.222913    1619 out.go:177] * Verifying csi-hostpath-driver addon...
	I0818 11:38:58.266504    1619 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0818 11:38:58.267714    1619 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0818 11:38:58.287798    1619 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0818 11:38:58.287817    1619 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0818 11:38:58.310603    1619 default_sa.go:45] found service account: "default"
	I0818 11:38:58.310618    1619 default_sa.go:55] duration metric: took 140.134855ms for default service account to be created ...
	I0818 11:38:58.310624    1619 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 11:38:58.313441    1619 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0818 11:38:58.313451    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:38:58.331762    1619 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0818 11:38:58.331776    1619 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0818 11:38:58.361738    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:38:58.361891    1619 system_pods.go:86] 19 kube-system pods found
	I0818 11:38:58.361906    1619 system_pods.go:89] "coredns-6f6b679f8f-2j78q" [7faddd85-0ae9-46c3-9e07-7a9e020fb5ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 11:38:58.361914    1619 system_pods.go:89] "coredns-6f6b679f8f-fvl94" [1381c864-3490-4f10-a81e-489a86be59a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 11:38:58.361920    1619 system_pods.go:89] "csi-hostpath-attacher-0" [6dba9420-943d-441c-a306-4c77c84bd138] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0818 11:38:58.361924    1619 system_pods.go:89] "csi-hostpath-resizer-0" [dec80c06-9e48-437d-b25f-b02a98587f9a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0818 11:38:58.361928    1619 system_pods.go:89] "csi-hostpathplugin-sc569" [9babc244-8452-4381-af50-0abb2c34e184] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0818 11:38:58.361931    1619 system_pods.go:89] "etcd-addons-103000" [999cb82d-3f37-43c5-ab40-c8d6176d4572] Running
	I0818 11:38:58.361934    1619 system_pods.go:89] "kube-apiserver-addons-103000" [60442f8f-a99d-4834-9329-5b4bb14ce6db] Running
	I0818 11:38:58.361937    1619 system_pods.go:89] "kube-controller-manager-addons-103000" [607652bb-9b82-4629-a6ae-828c7157c24d] Running
	I0818 11:38:58.361941    1619 system_pods.go:89] "kube-ingress-dns-minikube" [5b5401f3-38e9-4517-873f-51e8d436195f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0818 11:38:58.361944    1619 system_pods.go:89] "kube-proxy-rfzrs" [6f10c35f-83fb-4572-aaea-7f119ef103d6] Running
	I0818 11:38:58.361947    1619 system_pods.go:89] "kube-scheduler-addons-103000" [b47d4763-0739-473e-a570-c159c509cc27] Running
	I0818 11:38:58.361951    1619 system_pods.go:89] "metrics-server-8988944d9-w86gd" [c0e788e3-9dc8-424b-bfef-4756401a662e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 11:38:58.361973    1619 system_pods.go:89] "nvidia-device-plugin-daemonset-gkn4q" [b7ad2759-6346-4725-ba80-628779b23d51] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0818 11:38:58.361987    1619 system_pods.go:89] "registry-6fb4cdfc84-mg7mv" [9759a33e-5cfb-46e6-af39-b7afdc6ae4ed] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0818 11:38:58.361992    1619 system_pods.go:89] "registry-proxy-8jjw4" [b82f8fbe-a074-466b-9e7f-b0a4ce9248ae] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0818 11:38:58.361997    1619 system_pods.go:89] "snapshot-controller-56fcc65765-vgjn9" [da152b75-eb7c-43c0-991f-b851dd35b584] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0818 11:38:58.362001    1619 system_pods.go:89] "snapshot-controller-56fcc65765-zplw5" [53c5021e-d618-42ed-a127-9143c17d7420] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0818 11:38:58.362012    1619 system_pods.go:89] "storage-provisioner" [d2fef2c2-c9bd-47e1-9945-fdfe12d17610] Running
	I0818 11:38:58.362020    1619 system_pods.go:89] "tiller-deploy-b48cc5f79-swqh8" [207e865c-151f-454b-8fca-434a9300d93a] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0818 11:38:58.362026    1619 system_pods.go:126] duration metric: took 51.397815ms to wait for k8s-apps to be running ...
	I0818 11:38:58.362034    1619 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 11:38:58.362085    1619 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 11:38:58.391093    1619 system_svc.go:56] duration metric: took 29.056775ms WaitForService to wait for kubelet
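
The kubelet probe above (system_svc.go) shells out to systemctl over SSH and keys entirely off the exit status, since --quiet suppresses all output. Ignoring the SSH transport, the local equivalent is a one-liner; this is a sketch, not minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive reports whether the kubelet unit is active: with
// --quiet, systemctl prints nothing and exit code 0 means "active".
func kubeletActive() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet running:", kubeletActive())
}
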
	I0818 11:38:58.391109    1619 kubeadm.go:582] duration metric: took 11.557289406s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 11:38:58.391121    1619 node_conditions.go:102] verifying NodePressure condition ...
	I0818 11:38:58.400297    1619 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0818 11:38:58.400312    1619 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0818 11:38:58.417549    1619 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 11:38:58.417568    1619 node_conditions.go:123] node cpu capacity is 2
	I0818 11:38:58.417576    1619 node_conditions.go:105] duration metric: took 26.452092ms to run NodePressure ...
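
The NodePressure verification reads each node's status: the 17734596Ki and cpu-2 figures above come from status.capacity, and the "pressure" half checks that conditions such as MemoryPressure and DiskPressure stay False. A hedged helper along those lines, assuming a clientset built as in the pod-list sketch above:

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// VerifyNodePressure prints each node's capacity (the source of the
// Ki / cpu figures above) and fails if any pressure condition is True.
func VerifyNodePressure(cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		fmt.Printf("node storage ephemeral capacity is %s\n", n.Status.Capacity.StorageEphemeral())
		fmt.Printf("node cpu capacity is %s\n", n.Status.Capacity.Cpu())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					return fmt.Errorf("node %s reports %s", n.Name, c.Type)
				}
			}
		}
	}
	return nil
}
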
	I0818 11:38:58.417585    1619 start.go:241] waiting for startup goroutines ...
	I0818 11:38:58.447074    1619 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
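
Addon installation follows a two-step pattern visible above: manifests are scp'd into /etc/kubernetes/addons inside the VM (addons.go:431 / ssh_runner.go:362), then one kubectl apply covers all of them with the in-VM kubeconfig. Ignoring the SSH hop, the apply step is roughly the following; it assumes the sudoers policy allows passing KUBECONFIG as an environment assignment:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Mirror of the logged command: sudo carries KUBECONFIG through as an
	// environment assignment, then kubectl applies all three manifests.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.0/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/gcp-auth-ns.yaml",
		"-f", "/etc/kubernetes/addons/gcp-auth-service.yaml",
		"-f", "/etc/kubernetes/addons/gcp-auth-webhook.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "apply failed:", err)
	}
}
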
	I0818 11:38:58.562086    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:38:58.774050    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:38:58.843764    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:38:59.071891    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:38:59.110316    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:59.110339    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:59.110587    1619 main.go:141] libmachine: (addons-103000) DBG | Closing plugin on server side
	I0818 11:38:59.110593    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:59.110601    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:59.110611    1619 main.go:141] libmachine: Making call to close driver server
	I0818 11:38:59.110617    1619 main.go:141] libmachine: (addons-103000) Calling .Close
	I0818 11:38:59.110765    1619 main.go:141] libmachine: Successfully made call to close driver server
	I0818 11:38:59.110776    1619 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 11:38:59.110786    1619 main.go:141] libmachine: (addons-103000) DBG | Closing plugin on server side
	I0818 11:38:59.111737    1619 addons.go:475] Verifying addon gcp-auth=true in "addons-103000"
	I0818 11:38:59.136414    1619 out.go:177] * Verifying gcp-auth addon...
	I0818 11:38:59.193179    1619 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0818 11:38:59.196819    1619 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
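
Every kapi.go:96 line that follows is one tick of the same loop: list pods matching the label selector and keep waiting while any is still Pending (the bracketed <nil> appears to be the last error from the readiness check). A hedged reconstruction with client-go's wait helper, not minikube's actual kapi.WaitForPods; the 500ms interval is inferred from the timestamps:

package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForPods polls every 500ms until every pod matching selector in ns
// is Running, or the timeout expires.
func WaitForPods(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // transient errors and empty lists: keep waiting
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				return false, nil
			}
		}
		return true, nil
	})
}
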
	I0818 11:38:59.299368    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:38:59.398578    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:38:59.563030    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:38:59.772803    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:38:59.844966    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:00.062513    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:00.298129    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:00.344003    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:00.562133    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:00.771268    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:00.844157    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:01.062288    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:01.272258    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:01.344151    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:01.563023    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:01.771491    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:01.843592    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:02.062788    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:02.271522    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:02.344709    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:02.562233    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:02.771963    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:02.879546    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:03.063118    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:03.272188    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:03.345247    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:03.562333    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:03.772778    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:03.843999    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:04.062454    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:04.273618    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:04.344114    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:04.562373    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:04.771582    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:04.843886    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:05.062005    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:05.273022    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:05.343816    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:05.562915    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:05.770675    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:05.845335    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:06.062377    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:06.273934    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:06.344909    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:06.562433    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:06.771738    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:06.843863    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:07.063746    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:07.271666    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:07.344167    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:07.563378    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:07.771913    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:07.844253    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:08.064890    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:08.271420    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:08.343996    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:08.562454    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:08.771687    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:08.844607    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:09.061883    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:09.271425    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:09.344065    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:09.623209    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:09.774170    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:09.844779    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:10.063999    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:10.272058    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:10.344924    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:10.563878    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:10.772268    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:10.844022    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:11.063555    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:11.271223    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:11.344430    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:11.562843    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:11.771192    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:11.844402    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:12.062607    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:12.272195    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:12.344165    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:12.562241    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:12.771418    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:12.844054    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:13.082272    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:13.272785    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:13.343990    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:13.562447    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:13.771632    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:13.843714    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:14.062452    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:14.271905    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:14.344985    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:14.561974    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:14.772436    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:14.843771    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:15.062523    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:15.271809    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:15.344112    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:15.561452    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:15.771652    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:15.843620    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:16.062112    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:16.270425    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:16.344260    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:16.587321    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:16.771262    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:16.844009    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:17.062424    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:17.272182    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:17.393867    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:17.564772    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:17.772256    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:17.843834    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:18.063401    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:18.273840    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:18.345952    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:18.562084    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:18.770725    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:18.844964    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:19.062051    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:19.270208    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:19.344687    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:19.562603    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:19.771272    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:19.844476    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:20.062095    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:20.271782    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:20.343891    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:20.562011    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:20.771220    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:20.845714    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:21.061908    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:21.271210    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:21.344021    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:21.561825    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:21.851439    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:21.851587    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:22.063303    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:22.274177    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:22.345473    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:22.561974    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:22.770720    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:22.843988    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:23.062740    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:23.271636    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:23.343964    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:23.561407    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:23.771286    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:23.843782    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:24.061823    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:24.275935    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:24.345864    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:24.564106    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:24.770274    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:24.845332    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:25.061389    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:25.271735    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:25.345204    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:25.562004    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:25.772932    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:25.845058    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:26.061802    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:26.273961    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:26.345072    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:26.562718    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:26.770454    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:26.844049    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:27.063496    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:27.271749    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:27.343801    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:27.563540    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:27.799605    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:27.843976    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:28.061498    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:28.270304    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:28.346084    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:28.562911    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:28.771408    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:28.844300    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:29.062228    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:29.271091    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:29.343812    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:29.562300    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:29.798592    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:29.900194    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:30.061952    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:30.272360    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:30.343978    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:30.563111    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:30.772426    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:30.872287    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:31.062680    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:31.271825    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:31.344118    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:31.562376    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:31.771345    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:31.844357    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:32.063032    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:32.271829    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:32.343714    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:32.561812    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:32.772306    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:32.844155    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:33.061594    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:33.271362    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:33.343712    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:33.561667    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:33.771331    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:33.843681    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:34.061345    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:34.271369    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:34.343523    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:34.652285    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:34.771902    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:34.843932    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:35.062659    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:35.270759    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:35.370633    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:35.563246    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 11:39:35.771947    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:35.844655    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:36.062851    1619 kapi.go:107] duration metric: took 44.003662042s to wait for kubernetes.io/minikube-addons=registry ...
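
The "duration metric" lines throughout, like the 44.003662042s above, are plain wall-clock measurements taken around each wait; Go's default Duration formatting produces exactly this nanosecond-precision form. A trivial sketch:

package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	time.Sleep(1234 * time.Millisecond) // stand-in for the real wait loop
	// time.Duration's String() yields values like "44.003662042s".
	fmt.Printf("duration metric: took %s to wait for kubernetes.io/minikube-addons=registry\n",
		time.Since(start))
}
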
	I0818 11:39:36.271501    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:36.344219    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:36.771163    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:36.843970    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:37.297069    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:37.343592    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:37.773352    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:37.844555    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:38.270763    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:38.345058    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:38.771032    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:38.844538    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:39.271524    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:39.343521    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:39.771053    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:39.843578    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:40.270376    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:40.343641    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:40.770076    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:40.843834    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:41.272053    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:41.343686    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:41.770115    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:41.843483    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:42.271837    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:42.344285    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:42.770659    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:42.843779    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:43.271544    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:43.343458    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:43.771249    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:43.844085    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:44.272509    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:44.343825    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:44.770309    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:44.843512    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:45.270376    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:45.371703    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:45.772220    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:45.845816    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:46.270404    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:46.343390    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:46.770621    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:46.843650    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:47.270970    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:47.343552    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:47.798665    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:47.897700    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:48.272663    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:48.343608    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:48.772367    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:48.844236    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:49.270499    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:49.344066    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:49.772402    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:49.843982    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:50.272032    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:50.343292    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:50.772074    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:50.843581    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:51.365700    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:51.365897    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:51.798318    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:51.845623    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:52.271326    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:52.347020    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:52.771634    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:52.844166    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:53.286800    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:53.343388    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:53.775722    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:53.845509    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:54.273101    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:54.343958    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:54.772041    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:54.843884    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:55.270836    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:55.343431    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:55.770344    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:55.843449    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:56.271420    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:56.345032    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:56.803325    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:56.845321    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:57.271598    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:57.343520    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:57.770289    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:57.843536    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:58.270281    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:58.343859    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:58.772226    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:58.845187    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:59.272029    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:59.344082    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:39:59.797622    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:39:59.898855    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:00.272635    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:00.344882    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:00.799118    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:00.898403    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:01.271185    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:01.344139    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:01.770975    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:01.844854    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:02.270249    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:02.344515    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:02.799124    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:02.846086    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:03.270395    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:03.343924    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:03.770887    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:03.843733    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:04.271715    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:04.343663    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:04.772327    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:04.872783    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:05.270346    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:05.344266    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:05.770822    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:05.845237    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:06.270759    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:06.344771    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:06.770534    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:06.843360    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:07.271919    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:07.344467    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:07.770524    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:07.843756    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:08.270841    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:08.343290    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:08.769966    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:08.843783    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:09.271252    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:09.343666    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:09.770972    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:09.843454    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:10.270195    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:10.344428    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:10.798136    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:10.899005    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:11.270243    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:11.344809    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:11.770552    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:11.843713    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:12.270871    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:12.343374    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:12.825643    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:12.843487    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:13.271787    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:13.343258    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:13.770881    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:13.843544    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:14.270716    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:14.343538    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:14.771971    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:14.980220    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:15.270360    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:15.343349    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:15.797919    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:15.845289    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:16.271665    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:16.343661    1619 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 11:40:16.770970    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:16.844597    1619 kapi.go:107] duration metric: took 1m21.503909962s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0818 11:40:17.306682    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:17.771134    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:18.273128    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:18.770529    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:19.297865    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:19.770886    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:20.272301    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:20.771152    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:21.271759    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:21.771216    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:22.270573    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:22.770257    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:23.271152    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:23.773070    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:24.271160    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:24.770532    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:25.270151    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:25.770256    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:26.272932    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:26.772172    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 11:40:27.271274    1619 kapi.go:107] duration metric: took 1m29.004186554s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0818 11:41:44.197521    1619 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0818 11:41:44.197534    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 11:41:44.694555    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 11:41:45.195253    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 11:41:45.695399    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 11:41:46.194640    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 11:41:46.693984    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 11:41:47.196267    1619 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 11:41:47.696401    1619 kapi.go:107] duration metric: took 2m48.504404476s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0818 11:41:47.722092    1619 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-103000 cluster.
	I0818 11:41:47.743988    1619 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0818 11:41:47.803210    1619 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0818 11:41:47.830297    1619 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, storage-provisioner, helm-tiller, default-storageclass, yakd, metrics-server, nvidia-device-plugin, inspektor-gadget, volcano, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0818 11:41:47.851173    1619 addons.go:510] duration metric: took 3m1.018514243s for enable addons: enabled=[ingress-dns cloud-spanner storage-provisioner helm-tiller default-storageclass yakd metrics-server nvidia-device-plugin inspektor-gadget volcano storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0818 11:41:47.851242    1619 start.go:246] waiting for cluster config update ...
	I0818 11:41:47.851276    1619 start.go:255] writing updated cluster config ...
	I0818 11:41:47.891077    1619 ssh_runner.go:195] Run: rm -f paused
	I0818 11:41:47.939681    1619 start.go:600] kubectl: 1.29.2, cluster: 1.31.0 (minor skew: 2)
	I0818 11:41:47.978259    1619 out.go:201] 
	W0818 11:41:47.999247    1619 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0.
	I0818 11:41:48.020048    1619 out.go:177]   - Want kubectl v1.31.0? Try 'minikube kubectl -- get pods -A'
	I0818 11:41:48.062257    1619 out.go:177] * Done! kubectl is now configured to use "addons-103000" cluster and "default" namespace by default
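The "waiting for pod" lines above are minikube polling each addon's label selector roughly every 500ms until its pods leave Pending, then recording a duration metric (1m21s for ingress-nginx, 1m29s for csi-hostpath-driver, 2m48s for gcp-auth). A minimal sketch of that polling pattern, assuming client-go and the default kubeconfig path; an illustration only, not minikube's actual kapi.go:

// waitpods.go - sketch of a label-selector pod wait loop (assumed code).
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	for time.Since(start) < timeout {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				// mirrors the kapi.go:96 lines above
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				ready = false
			}
		}
		if ready {
			// mirrors the kapi.go:107 duration metric above
			fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", selector)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPods(context.Background(), cs, "kube-system",
		"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
}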
	
	
	==> Docker <==
	Aug 18 18:41:35 addons-103000 dockerd[1274]: time="2024-08-18T18:41:35.170338461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 18:41:35 addons-103000 dockerd[1268]: time="2024-08-18T18:41:35.908506055Z" level=info msg="ignoring event" container=e8cd02c9491a276ff96d292ff036dcc2b7a965f29e9e3da0e67d376544fd6ba2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 18 18:41:35 addons-103000 dockerd[1274]: time="2024-08-18T18:41:35.908655918Z" level=info msg="shim disconnected" id=e8cd02c9491a276ff96d292ff036dcc2b7a965f29e9e3da0e67d376544fd6ba2 namespace=moby
	Aug 18 18:41:35 addons-103000 dockerd[1274]: time="2024-08-18T18:41:35.908687552Z" level=warning msg="cleaning up after shim disconnected" id=e8cd02c9491a276ff96d292ff036dcc2b7a965f29e9e3da0e67d376544fd6ba2 namespace=moby
	Aug 18 18:41:35 addons-103000 dockerd[1274]: time="2024-08-18T18:41:35.908693841Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 18 18:41:44 addons-103000 dockerd[1274]: time="2024-08-18T18:41:44.372358512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 18 18:41:44 addons-103000 dockerd[1274]: time="2024-08-18T18:41:44.372652882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 18 18:41:44 addons-103000 dockerd[1274]: time="2024-08-18T18:41:44.373006845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 18:41:44 addons-103000 dockerd[1274]: time="2024-08-18T18:41:44.373282411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 18:41:44 addons-103000 cri-dockerd[1165]: time="2024-08-18T18:41:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/637d2df516ddf8b4e5314e46609b15d8347d78a5e446e38fb2d86e3757304304/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 18 18:41:44 addons-103000 dockerd[1268]: time="2024-08-18T18:41:44.594037770Z" level=warning msg="reference for unknown type: " digest="sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb"
	Aug 18 18:41:46 addons-103000 cri-dockerd[1165]: time="2024-08-18T18:41:46Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb"
	Aug 18 18:41:47 addons-103000 dockerd[1274]: time="2024-08-18T18:41:47.031989360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 18 18:41:47 addons-103000 dockerd[1274]: time="2024-08-18T18:41:47.032952213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 18 18:41:47 addons-103000 dockerd[1274]: time="2024-08-18T18:41:47.032988198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 18:41:47 addons-103000 dockerd[1274]: time="2024-08-18T18:41:47.033145173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 18:42:59 addons-103000 cri-dockerd[1165]: time="2024-08-18T18:42:59Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc"
	Aug 18 18:42:59 addons-103000 dockerd[1274]: time="2024-08-18T18:42:59.179590041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 18 18:42:59 addons-103000 dockerd[1274]: time="2024-08-18T18:42:59.179930796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 18 18:42:59 addons-103000 dockerd[1274]: time="2024-08-18T18:42:59.180506100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 18:42:59 addons-103000 dockerd[1274]: time="2024-08-18T18:42:59.181316178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 18:42:59 addons-103000 dockerd[1274]: time="2024-08-18T18:42:59.912654982Z" level=info msg="shim disconnected" id=381ea22f1354491902cd9676df8d776d0c5b15daf7f2f0b10560b76fec4adc53 namespace=moby
	Aug 18 18:42:59 addons-103000 dockerd[1274]: time="2024-08-18T18:42:59.912986335Z" level=warning msg="cleaning up after shim disconnected" id=381ea22f1354491902cd9676df8d776d0c5b15daf7f2f0b10560b76fec4adc53 namespace=moby
	Aug 18 18:42:59 addons-103000 dockerd[1274]: time="2024-08-18T18:42:59.913031621Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 18 18:42:59 addons-103000 dockerd[1268]: time="2024-08-18T18:42:59.913194469Z" level=info msg="ignoring event" container=381ea22f1354491902cd9676df8d776d0c5b15daf7f2f0b10560b76fec4adc53 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
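The dockerd lines above record an ordinary container teardown: the containerd shim disconnects, the dead shim is cleaned up, and moby logs "ignoring event" for the resulting /tasks/delete. The same lifecycle is visible on the Docker events stream; a sketch, assuming the pre-v25 Docker Go SDK where the options type is still types.EventsOptions (newer SDKs renamed it events.ListOptions):

// events_watch.go - sketch (assumed code, not from this report) that tails
// container "die" events, the moby-side counterpart of the shim-delete lines.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	f := filters.NewArgs()
	f.Add("type", "container")
	f.Add("event", "die")
	msgs, errs := cli.Events(context.Background(), types.EventsOptions{Filters: f})
	for {
		select {
		case m := <-msgs:
			fmt.Printf("%s %s name=%s exitCode=%s\n",
				m.Action, m.Actor.ID[:12], m.Actor.Attributes["name"], m.Actor.Attributes["exitCode"])
		case err := <-errs:
			log.Fatal(err)
		}
	}
}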
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	381ea22f13544       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc                            2 minutes ago       Exited              gadget                                   5                   114c564baf22b       gadget-bc29t
	1aad7ed876b84       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 3 minutes ago       Running             gcp-auth                                 0                   637d2df516ddf       gcp-auth-89d5ffd79-d9rxr
	759bee40cae3d       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          4 minutes ago       Running             csi-snapshotter                          0                   cc579865138bb       csi-hostpathplugin-sc569
	543fb6ec32a49       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          4 minutes ago       Running             csi-provisioner                          0                   cc579865138bb       csi-hostpathplugin-sc569
	aa5d7850e0c29       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            4 minutes ago       Running             liveness-probe                           0                   cc579865138bb       csi-hostpathplugin-sc569
	7e834782f0814       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           4 minutes ago       Running             hostpath                                 0                   cc579865138bb       csi-hostpathplugin-sc569
	d52746ff4c54a       volcanosh/vc-webhook-manager@sha256:31e8c7adc6859e582b8edd053e2e926409bcfd1bf39e3a10d05949f7738144c4                                         4 minutes ago       Running             admission                                0                   36ac5858bb032       volcano-admission-77d7d48b68-gwkn5
	da8f1c60168b3       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce                             4 minutes ago       Running             controller                               0                   87b71ab4de298       ingress-nginx-controller-bc57996ff-s5rhl
	775b2410c9acc       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                4 minutes ago       Running             node-driver-registrar                    0                   cc579865138bb       csi-hostpathplugin-sc569
	5561004cfc362       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             5 minutes ago       Running             csi-attacher                             0                   83780304347dd       csi-hostpath-attacher-0
	bdae03c25266d       volcanosh/vc-scheduler@sha256:1ebc36090a981cb8bd703f9e9842f8e0a53ef6bf9034d51defc1ea689f38a60f                                               5 minutes ago       Running             volcano-scheduler                        0                   1210bd1b372fa       volcano-scheduler-576bc46687-2vk2m
	f1e8a79cf4d5a       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              5 minutes ago       Running             csi-resizer                              0                   a7e79533bf643       csi-hostpath-resizer-0
	f8ab062b69739       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   5 minutes ago       Running             csi-external-health-monitor-controller   0                   cc579865138bb       csi-hostpathplugin-sc569
	ed870c86d4ede       volcanosh/vc-controller-manager@sha256:d1337c3af008318577ca718a7f35b75cefc1071a35749c4f9430035abd4fbc93                                      5 minutes ago       Running             volcano-controllers                      0                   9762c11182edc       volcano-controllers-56675bb4d5-9jmqr
	f26f3ce101d02       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3                   5 minutes ago       Exited              patch                                    0                   8f551baa12413       ingress-nginx-admission-patch-vm7d7
	a227dc3561307       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3                   5 minutes ago       Exited              create                                   0                   e2f0ee2e58f71       ingress-nginx-admission-create-bmmvw
	c379006b2261b       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      5 minutes ago       Running             volume-snapshot-controller               0                   02affb6eaaf59       snapshot-controller-56fcc65765-zplw5
	b4a8111a15210       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      5 minutes ago       Running             volume-snapshot-controller               0                   2df902af3a53d       snapshot-controller-56fcc65765-vgjn9
	4558bebf24e20       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       5 minutes ago       Running             local-path-provisioner                   0                   dc4269e4864e6       local-path-provisioner-86d989889c-mxh4z
	c1595e2d751d8       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367                              5 minutes ago       Running             registry-proxy                           0                   97705b60a0679       registry-proxy-8jjw4
	d8870bdfd8d64       registry@sha256:12120425f07de11a1b899e418d4b0ea174c8d4d572d45bdb640f93bc7ca06a3d                                                             5 minutes ago       Running             registry                                 0                   b3c87a2646014       registry-6fb4cdfc84-mg7mv
	a1c71aaaf685d       registry.k8s.io/metrics-server/metrics-server@sha256:db3800085a0957083930c3932b17580eec652cfb6156a05c0f79c7543e80d17a                        5 minutes ago       Running             metrics-server                           0                   59dd6e6d42c0e       metrics-server-8988944d9-w86gd
	2008cd24d517e       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  5 minutes ago       Running             tiller                                   0                   94ba8ae32e56c       tiller-deploy-b48cc5f79-swqh8
	7a8d657307ff3       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        5 minutes ago       Running             yakd                                     0                   4c5f8e82676c1       yakd-dashboard-67d98fc6b-24wj5
	f1b55a1a48f54       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   5bf6c38f52397       nvidia-device-plugin-daemonset-gkn4q
	293852b750794       gcr.io/cloud-spanner-emulator/emulator@sha256:ea3a9e70a98bf648952401e964c5403d93e980837acf924288df19e0077ae7fb                               5 minutes ago       Running             cloud-spanner-emulator                   0                   26c4e34c051a8       cloud-spanner-emulator-c4bc9b5f8-tntd2
	bfe29bc01845a       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c                             6 minutes ago       Running             minikube-ingress-dns                     0                   e2f72c5bf9dc7       kube-ingress-dns-minikube
	ed4c8659190e5       6e38f40d628db                                                                                                                                6 minutes ago       Running             storage-provisioner                      0                   966bee8fec334       storage-provisioner
	f83583a63190d       cbb01a7bd410d                                                                                                                                6 minutes ago       Running             coredns                                  0                   f48eb77b4b8cd       coredns-6f6b679f8f-2j78q
	23eead9709cd6       cbb01a7bd410d                                                                                                                                6 minutes ago       Running             coredns                                  0                   4c6a1103d8f69       coredns-6f6b679f8f-fvl94
	765d82e220c34       ad83b2ca7b09e                                                                                                                                6 minutes ago       Running             kube-proxy                               0                   b56fe5999228a       kube-proxy-rfzrs
	acbb0f757559a       2e96e5913fc06                                                                                                                                6 minutes ago       Running             etcd                                     0                   ac9a4b5c01c58       etcd-addons-103000
	b2bb050b1d878       1766f54c897f0                                                                                                                                6 minutes ago       Running             kube-scheduler                           0                   69b7f7ea9b1b6       kube-scheduler-addons-103000
	f8cea5b100202       045733566833c                                                                                                                                6 minutes ago       Running             kube-controller-manager                  0                   d8c4efd9fdd93       kube-controller-manager-addons-103000
	95829f16e5f81       604f5db92eaa8                                                                                                                                6 minutes ago       Running             kube-apiserver                           0                   b242ba14132be       kube-apiserver-addons-103000
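One detail worth noticing in the table above: the gadget container is Exited with ATTEMPT 5, i.e. it has been restarted five times, while everything else is Running. A throwaway helper (an assumption, not part of minikube) that filters a saved copy of such a table for non-Running rows by splitting on the runs of spaces that align the columns:

// exited.go - sketch: pipe the container status table in on stdin.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	cols := regexp.MustCompile(`\s{2,}`) // columns are aligned with 2+ spaces
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1<<20), 1<<20) // image digests make for long lines
	for sc.Scan() {
		f := cols.Split(sc.Text(), -1)
		// CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD-ID POD
		if len(f) >= 8 && f[3] != "Running" && f[3] != "STATE" {
			fmt.Printf("%s: state=%s attempts=%s pod=%s\n", f[4], f[3], f[5], f[7])
		}
	}
}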
	
	
	==> controller_ingress [da8f1c60168b] <==
	W0818 18:40:15.960295       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0818 18:40:15.960508       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0818 18:40:15.964494       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.0" state="clean" commit="9edcffcde5595e8a5b1a35f88c421764e575afce" platform="linux/amd64"
	I0818 18:40:16.050813       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0818 18:40:16.075602       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0818 18:40:16.088544       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0818 18:40:16.101151       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"693a6732-0192-4dbf-ad4b-ddfbdcef03c5", APIVersion:"v1", ResourceVersion:"702", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0818 18:40:16.106451       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"6c24847e-f204-476f-9f66-ba7bc3edad91", APIVersion:"v1", ResourceVersion:"703", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0818 18:40:16.106492       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"c634e74f-90d9-4494-8905-22bcfc7458bd", APIVersion:"v1", ResourceVersion:"704", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0818 18:40:17.290449       7 nginx.go:317] "Starting NGINX process"
	I0818 18:40:17.290698       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0818 18:40:17.290959       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0818 18:40:17.291129       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0818 18:40:17.302885       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0818 18:40:17.302995       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-s5rhl"
	I0818 18:40:17.330937       7 controller.go:213] "Backend successfully reloaded"
	I0818 18:40:17.331169       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0818 18:40:17.331390       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-s5rhl", UID:"6db91781-b5ec-47b4-aadb-ac73e867f480", APIVersion:"v1", ResourceVersion:"727", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0818 18:40:17.410340       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-s5rhl" node="addons-103000"
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
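The controller log above shows client-go's standard lease-based leader election: the pod tries to acquire the ingress-nginx/ingress-nginx-leader lease and, as the only replica, wins it within a few milliseconds. A minimal sketch of that pattern with assumed timings; not the ingress-nginx controller's actual code:

// leaderelect.go - sketch of the lease pattern behind the leaderelection.go lines.
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname() // pods use their own name, e.g. ingress-nginx-controller-bc57996ff-s5rhl

	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Namespace: "ingress-nginx", Name: "ingress-nginx-leader"},
		Client:     cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { log.Println("successfully acquired lease") },
			OnStoppedLeading: func() { log.Println("lost lease") },
			OnNewLeader:      func(leader string) { log.Printf("New leader elected: %s", leader) },
		},
	})
}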
	
	
	
	==> coredns [23eead9709cd] <==
	[INFO] plugin/kubernetes: Trace[1092765424]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 18:38:49.840) (total time: 30001ms):
	Trace[1092765424]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:39:19.841)
	Trace[1092765424]: [30.001368624s] [30.001368624s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[146822620]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 18:38:49.840) (total time: 30001ms):
	Trace[146822620]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:39:19.841)
	Trace[146822620]: [30.001264904s] [30.001264904s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[626628858]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 18:38:49.840) (total time: 30001ms):
	Trace[626628858]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:39:19.842)
	Trace[626628858]: [30.001251898s] [30.001251898s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 10.244.0.10:35395 - 57951 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000154829s
	[INFO] 10.244.0.10:35395 - 30297 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000404103s
	[INFO] 10.244.0.10:49156 - 2486 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000147411s
	[INFO] 10.244.0.10:49156 - 8375 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000108419s
	[INFO] 10.244.0.26:53896 - 63500 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000223189s
	[INFO] 10.244.0.26:34425 - 2336 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000235713s
	[INFO] 10.244.0.26:42633 - 48443 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000062585s
	[INFO] 10.244.0.26:41329 - 4617 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000074849s
	[INFO] 10.244.0.26:49888 - 54375 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00064482s
	[INFO] 10.244.0.26:36382 - 10815 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001081196s
	
	
	==> coredns [f83583a63190] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[972691817]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 18:38:49.906) (total time: 30001ms):
	Trace[972691817]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:39:19.908)
	Trace[972691817]: [30.001359989s] [30.001359989s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[333663432]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 18:38:49.907) (total time: 30004ms):
	Trace[333663432]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (18:39:19.911)
	Trace[333663432]: [30.004778273s] [30.004778273s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] 10.244.0.10:38022 - 24520 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000159201s
	[INFO] 10.244.0.10:38022 - 54773 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000049765s
	[INFO] 10.244.0.10:43149 - 46002 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000119816s
	[INFO] 10.244.0.10:43149 - 33712 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000122104s
	[INFO] 10.244.0.10:54271 - 37560 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000185799s
	[INFO] 10.244.0.10:54271 - 8870 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000161785s
	[INFO] 10.244.0.10:58921 - 12089 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000101509s
	[INFO] 10.244.0.10:58921 - 50747 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000076241s
	[INFO] 10.244.0.10:50432 - 26067 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000050178s
	[INFO] 10.244.0.10:50432 - 43756 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000107203s
	[INFO] 10.244.0.10:43829 - 14212 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000053975s
	[INFO] 10.244.0.10:43829 - 50822 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000041458s
	[INFO] 10.244.0.26:43899 - 64036 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000164431s
	[INFO] 10.244.0.26:47262 - 33450 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000247039s
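The paired NXDOMAIN/NOERROR queries above are resolv.conf search-path expansion at work: with ndots:5, the pod's resolver tries registry.kube-system.svc.cluster.local against each search suffix first (hence the ...svc.cluster.local.kube-system.svc.cluster.local and similar NXDOMAINs) before the bare name answers NOERROR. A sketch that replays that sequence by hand, assuming the third-party github.com/miekg/dns package and the cluster DNS address 10.96.0.10 seen in the cri-dockerd resolv.conf line earlier:

// searchpath.go - sketch (assumed code) reproducing the query sequence above.
package main

import (
	"fmt"
	"log"

	"github.com/miekg/dns"
)

func main() {
	name := "registry.kube-system.svc.cluster.local"
	// search list implied by the kube-system pod's queries above, then the bare name
	suffixes := []string{"kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local", ""}

	c := new(dns.Client)
	for _, s := range suffixes {
		fqdn := dns.Fqdn(name)
		if s != "" {
			fqdn = dns.Fqdn(name + "." + s)
		}
		m := new(dns.Msg)
		m.SetQuestion(fqdn, dns.TypeA)
		r, _, err := c.Exchange(m, "10.96.0.10:53")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%-75s %s\n", fqdn, dns.RcodeToString[r.Rcode]) // NXDOMAIN until the bare name
		if r.Rcode == dns.RcodeSuccess {
			break
		}
	}
}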
	
	
	==> describe nodes <==
	Name:               addons-103000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-103000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=addons-103000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_18T11_38_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-103000
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-103000"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 18:38:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-103000
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 18:44:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 18:42:16 +0000   Sun, 18 Aug 2024 18:38:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 18:42:16 +0000   Sun, 18 Aug 2024 18:38:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 18:42:16 +0000   Sun, 18 Aug 2024 18:38:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 18:42:16 +0000   Sun, 18 Aug 2024 18:38:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.2
	  Hostname:    addons-103000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912944Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912944Ki
	  pods:               110
	System Info:
	  Machine ID:                 76b8bf03c7c54135931fc4c8c9d9dc14
	  System UUID:                a8b34f88-0000-0000-8d39-79d4089a7b73
	  Boot ID:                    632a69ac-ca8a-40df-9dc8-5444d562105b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-c4bc9b5f8-tntd2      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  gadget                      gadget-bc29t                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  gcp-auth                    gcp-auth-89d5ffd79-d9rxr                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-s5rhl    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         6m10s
	  kube-system                 coredns-6f6b679f8f-2j78q                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     6m18s
	  kube-system                 coredns-6f6b679f8f-fvl94                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     6m18s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 csi-hostpathplugin-sc569                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 etcd-addons-103000                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m25s
	  kube-system                 kube-apiserver-addons-103000                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-controller-manager-addons-103000       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-proxy-rfzrs                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-scheduler-addons-103000                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 metrics-server-8988944d9-w86gd              100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         6m13s
	  kube-system                 nvidia-device-plugin-daemonset-gkn4q        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 registry-6fb4cdfc84-mg7mv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 registry-proxy-8jjw4                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 snapshot-controller-56fcc65765-vgjn9        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 snapshot-controller-56fcc65765-zplw5        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 tiller-deploy-b48cc5f79-swqh8               0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  local-path-storage          local-path-provisioner-86d989889c-mxh4z     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	  volcano-system              volcano-admission-77d7d48b68-gwkn5          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  volcano-system              volcano-controllers-56675bb4d5-9jmqr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  volcano-system              volcano-scheduler-576bc46687-2vk2m          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-24wj5              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     6m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  0 (0%)
	  memory             658Mi (17%)  596Mi (15%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m15s                  kube-proxy       
	  Normal  Starting                 6m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m29s (x8 over 6m29s)  kubelet          Node addons-103000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m29s (x8 over 6m29s)  kubelet          Node addons-103000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m29s (x7 over 6m29s)  kubelet          Node addons-103000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m24s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m24s                  kubelet          Node addons-103000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m24s                  kubelet          Node addons-103000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m24s                  kubelet          Node addons-103000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m21s                  kubelet          Node addons-103000 status is now: NodeReady
	  Normal  RegisteredNode           6m19s                  node-controller  Node addons-103000 event: Registered Node addons-103000 in Controller
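The Allocated resources block is the column sums of the pod table above: CPU requests 100+100+100+100+250+200+100+100 = 1050m, i.e. 52% of the node's 2000m, and memory requests 90+70+70+100+200+128 = 658Mi, roughly 17% of the 3912944Ki allocatable. A quick arithmetic check (not from the report):

// alloc.go - sanity-checks the Allocated resources percentages.
package main

import "fmt"

func main() {
	cpuRequests := []int{100, 100, 100, 100, 250, 200, 100, 100} // m, from the pod table
	memRequests := []int{90, 70, 70, 100, 200, 128}              // Mi, from the pod table
	var cpu, mem int
	for _, c := range cpuRequests {
		cpu += c
	}
	for _, m := range memRequests {
		mem += m
	}
	allocatableMi := 3912944.0 / 1024 // Ki -> Mi
	fmt.Printf("cpu %dm (%.0f%%)\n", cpu, float64(cpu)/2000*100)              // 1050m (52%)
	fmt.Printf("memory %dMi (%.0f%%)\n", mem, float64(mem)/allocatableMi*100) // 658Mi (17%)
}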
	
	
	==> dmesg <==
	[  +4.003419] systemd-fstab-generator[1260]: Ignoring "noauto" option for root device
	[  +2.292197] kauditd_printk_skb: 136 callbacks suppressed
	[  +0.376593] systemd-fstab-generator[1509]: Ignoring "noauto" option for root device
	[  +4.677542] systemd-fstab-generator[1637]: Ignoring "noauto" option for root device
	[  +0.051570] kauditd_printk_skb: 48 callbacks suppressed
	[  +5.472601] systemd-fstab-generator[2041]: Ignoring "noauto" option for root device
	[  +0.094268] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.303243] systemd-fstab-generator[2167]: Ignoring "noauto" option for root device
	[  +0.088040] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.072031] kauditd_printk_skb: 107 callbacks suppressed
	[  +5.074887] kauditd_printk_skb: 163 callbacks suppressed
	[Aug18 18:39] kauditd_printk_skb: 65 callbacks suppressed
	[ +24.736107] kauditd_printk_skb: 1 callbacks suppressed
	[  +7.701887] kauditd_printk_skb: 10 callbacks suppressed
	[ +12.427499] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.335413] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.334516] kauditd_printk_skb: 7 callbacks suppressed
	[Aug18 18:40] kauditd_printk_skb: 34 callbacks suppressed
	[ +10.981218] kauditd_printk_skb: 38 callbacks suppressed
	[  +7.112428] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.380216] kauditd_printk_skb: 38 callbacks suppressed
	[Aug18 18:41] kauditd_printk_skb: 28 callbacks suppressed
	[ +32.653029] kauditd_printk_skb: 40 callbacks suppressed
	[  +9.203720] kauditd_printk_skb: 28 callbacks suppressed
	[Aug18 18:42] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [acbb0f757559] <==
	{"level":"info","ts":"2024-08-18T18:38:47.853023Z","caller":"traceutil/trace.go:171","msg":"trace[1965706503] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:366; }","duration":"156.873528ms","start":"2024-08-18T18:38:47.696146Z","end":"2024-08-18T18:38:47.853019Z","steps":["trace[1965706503] 'agreement among raft nodes before linearized reading'  (duration: 156.814362ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T18:38:47.853130Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.960535ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2024-08-18T18:38:47.853143Z","caller":"traceutil/trace.go:171","msg":"trace[98187772] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:366; }","duration":"154.974833ms","start":"2024-08-18T18:38:47.698165Z","end":"2024-08-18T18:38:47.853140Z","steps":["trace[98187772] 'agreement among raft nodes before linearized reading'  (duration: 154.949792ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T18:38:47.853240Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.146533ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-addons-103000\" ","response":"range_response_count:1 size:5758"}
	{"level":"info","ts":"2024-08-18T18:38:47.853252Z","caller":"traceutil/trace.go:171","msg":"trace[226253568] range","detail":"{range_begin:/registry/pods/kube-system/etcd-addons-103000; range_end:; response_count:1; response_revision:366; }","duration":"150.159275ms","start":"2024-08-18T18:38:47.703090Z","end":"2024-08-18T18:38:47.853249Z","steps":["trace[226253568] 'agreement among raft nodes before linearized reading'  (duration: 150.136726ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-18T18:38:58.336738Z","caller":"traceutil/trace.go:171","msg":"trace[1952502668] transaction","detail":"{read_only:false; response_revision:935; number_of_response:1; }","duration":"138.380454ms","start":"2024-08-18T18:38:58.198347Z","end":"2024-08-18T18:38:58.336727Z","steps":["trace[1952502668] 'process raft request'  (duration: 138.363813ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-18T18:38:58.336958Z","caller":"traceutil/trace.go:171","msg":"trace[1326470345] transaction","detail":"{read_only:false; response_revision:932; number_of_response:1; }","duration":"171.98847ms","start":"2024-08-18T18:38:58.164966Z","end":"2024-08-18T18:38:58.336954Z","steps":["trace[1326470345] 'process raft request'  (duration: 170.646283ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-18T18:38:58.337013Z","caller":"traceutil/trace.go:171","msg":"trace[539302172] transaction","detail":"{read_only:false; response_revision:933; number_of_response:1; }","duration":"171.981185ms","start":"2024-08-18T18:38:58.165028Z","end":"2024-08-18T18:38:58.337010Z","steps":["trace[539302172] 'process raft request'  (duration: 171.641403ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-18T18:38:58.337058Z","caller":"traceutil/trace.go:171","msg":"trace[643909156] transaction","detail":"{read_only:false; response_revision:934; number_of_response:1; }","duration":"150.673938ms","start":"2024-08-18T18:38:58.186379Z","end":"2024-08-18T18:38:58.337053Z","steps":["trace[643909156] 'process raft request'  (duration: 150.314208ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-18T18:38:58.337099Z","caller":"traceutil/trace.go:171","msg":"trace[1923130086] linearizableReadLoop","detail":"{readStateIndex:954; appliedIndex:951; }","duration":"142.207861ms","start":"2024-08-18T18:38:58.194887Z","end":"2024-08-18T18:38:58.337095Z","steps":["trace[1923130086] 'read index received'  (duration: 140.699607ms)","trace[1923130086] 'applied index is now lower than readState.Index'  (duration: 1.507857ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-18T18:38:58.337251Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.350244ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/csi-attacher\" ","response":"range_response_count:1 size:535"}
	{"level":"info","ts":"2024-08-18T18:38:58.337264Z","caller":"traceutil/trace.go:171","msg":"trace[1906517988] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-attacher; range_end:; response_count:1; response_revision:935; }","duration":"149.370515ms","start":"2024-08-18T18:38:58.187890Z","end":"2024-08-18T18:38:58.337261Z","steps":["trace[1906517988] 'agreement among raft nodes before linearized reading'  (duration: 149.3169ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T18:38:58.337345Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.103723ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/csi-hostpath-resizer-0\" ","response":"range_response_count:1 size:2632"}
	{"level":"info","ts":"2024-08-18T18:38:58.337356Z","caller":"traceutil/trace.go:171","msg":"trace[937635923] range","detail":"{range_begin:/registry/pods/kube-system/csi-hostpath-resizer-0; range_end:; response_count:1; response_revision:935; }","duration":"134.115771ms","start":"2024-08-18T18:38:58.203237Z","end":"2024-08-18T18:38:58.337353Z","steps":["trace[937635923] 'agreement among raft nodes before linearized reading'  (duration: 134.080904ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T18:38:58.337536Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.482364ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-18T18:38:58.337547Z","caller":"traceutil/trace.go:171","msg":"trace[949876926] range","detail":"{range_begin:/registry/serviceaccounts; range_end:; response_count:0; response_revision:935; }","duration":"119.495605ms","start":"2024-08-18T18:38:58.218049Z","end":"2024-08-18T18:38:58.337544Z","steps":["trace[949876926] 'agreement among raft nodes before linearized reading'  (duration: 119.477278ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-18T18:39:02.802820Z","caller":"traceutil/trace.go:171","msg":"trace[2003752756] transaction","detail":"{read_only:false; response_revision:989; number_of_response:1; }","duration":"109.338407ms","start":"2024-08-18T18:39:02.693353Z","end":"2024-08-18T18:39:02.802691Z","steps":["trace[2003752756] 'process raft request'  (duration: 107.865418ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-18T18:39:13.135843Z","caller":"traceutil/trace.go:171","msg":"trace[1388770884] transaction","detail":"{read_only:false; response_revision:1021; number_of_response:1; }","duration":"164.695728ms","start":"2024-08-18T18:39:12.971138Z","end":"2024-08-18T18:39:13.135834Z","steps":["trace[1388770884] 'process raft request'  (duration: 164.384748ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-18T18:39:34.721821Z","caller":"traceutil/trace.go:171","msg":"trace[1208833130] transaction","detail":"{read_only:false; response_revision:1110; number_of_response:1; }","duration":"148.161311ms","start":"2024-08-18T18:39:34.573650Z","end":"2024-08-18T18:39:34.721812Z","steps":["trace[1208833130] 'process raft request'  (duration: 147.905742ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-18T18:40:15.074766Z","caller":"traceutil/trace.go:171","msg":"trace[721707843] linearizableReadLoop","detail":"{readStateIndex:1297; appliedIndex:1296; }","duration":"134.564359ms","start":"2024-08-18T18:40:14.940191Z","end":"2024-08-18T18:40:15.074756Z","steps":["trace[721707843] 'read index received'  (duration: 134.48716ms)","trace[721707843] 'applied index is now lower than readState.Index'  (duration: 76.942µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-18T18:40:15.074879Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.673ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-08-18T18:40:15.074980Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.932774ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-18T18:40:15.074997Z","caller":"traceutil/trace.go:171","msg":"trace[713912958] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1261; }","duration":"119.949557ms","start":"2024-08-18T18:40:14.955042Z","end":"2024-08-18T18:40:15.074991Z","steps":["trace[713912958] 'agreement among raft nodes before linearized reading'  (duration: 119.92692ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-18T18:40:15.074997Z","caller":"traceutil/trace.go:171","msg":"trace[1629578412] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1261; }","duration":"134.762788ms","start":"2024-08-18T18:40:14.940188Z","end":"2024-08-18T18:40:15.074951Z","steps":["trace[1629578412] 'agreement among raft nodes before linearized reading'  (duration: 134.648727ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-18T18:40:15.075332Z","caller":"traceutil/trace.go:171","msg":"trace[1801954143] transaction","detail":"{read_only:false; response_revision:1261; number_of_response:1; }","duration":"150.035224ms","start":"2024-08-18T18:40:14.925292Z","end":"2024-08-18T18:40:15.075327Z","steps":["trace[1801954143] 'process raft request'  (duration: 149.411131ms)"],"step_count":1}
	
	
	==> gcp-auth [1aad7ed876b8] <==
	2024/08/18 18:41:47 GCP Auth Webhook started!
	2024/08/18 18:42:03 Ready to marshal response ...
	2024/08/18 18:42:03 Ready to write response ...
	2024/08/18 18:42:03 Ready to marshal response ...
	2024/08/18 18:42:03 Ready to write response ...
	
	
	==> kernel <==
	 18:45:05 up 6 min,  0 users,  load average: 0.14, 0.61, 0.41
	Linux addons-103000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [95829f16e5f8] <==
	W0818 18:40:08.857949       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.243.150:443: connect: connection refused
	W0818 18:40:09.906899       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.243.150:443: connect: connection refused
	W0818 18:40:10.953373       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.243.150:443: connect: connection refused
	W0818 18:40:12.024540       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.243.150:443: connect: connection refused
	W0818 18:40:13.070406       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.243.150:443: connect: connection refused
	W0818 18:40:14.146282       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.243.150:443: connect: connection refused
	W0818 18:40:15.170614       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.243.150:443: connect: connection refused
	W0818 18:40:16.189889       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.243.150:443: connect: connection refused
	W0818 18:40:17.205481       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.243.150:443: connect: connection refused
	W0818 18:40:18.260896       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.243.150:443: connect: connection refused
	W0818 18:40:19.333339       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.243.150:443: connect: connection refused
	W0818 18:40:20.416209       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.243.150:443: connect: connection refused
	W0818 18:40:21.488782       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.243.150:443: connect: connection refused
	W0818 18:40:22.082793       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.213.8:443: connect: connection refused
	E0818 18:40:22.082843       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.213.8:443: connect: connection refused" logger="UnhandledError"
	W0818 18:40:22.084442       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.100.243.150:443: connect: connection refused
	W0818 18:40:22.493652       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.243.150:443: connect: connection refused
	W0818 18:41:02.137438       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.213.8:443: connect: connection refused
	E0818 18:41:02.137487       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.213.8:443: connect: connection refused" logger="UnhandledError"
	W0818 18:41:02.197403       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.213.8:443: connect: connection refused
	E0818 18:41:02.197462       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.213.8:443: connect: connection refused" logger="UnhandledError"
	W0818 18:41:44.012315       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.213.8:443: connect: connection refused
	E0818 18:41:44.012423       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.213.8:443: connect: connection refused" logger="UnhandledError"
	I0818 18:42:03.572204       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0818 18:42:03.631950       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	
	
	==> kube-controller-manager [f8cea5b10020] <==
	I0818 18:41:02.214146       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0818 18:41:02.214377       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0818 18:41:02.221878       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0818 18:41:03.892643       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0818 18:41:03.920915       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0818 18:41:05.060515       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0818 18:41:05.078756       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0818 18:41:06.065647       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0818 18:41:06.070567       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0818 18:41:06.073942       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0818 18:41:06.083784       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0818 18:41:06.089133       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0818 18:41:06.094015       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0818 18:41:36.012895       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0818 18:41:36.013290       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0818 18:41:36.040999       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0818 18:41:36.041221       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0818 18:41:44.035159       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="27.675754ms"
	I0818 18:41:44.044754       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="9.551043ms"
	I0818 18:41:44.055068       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="10.265607ms"
	I0818 18:41:44.055286       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="174.597µs"
	I0818 18:41:47.668933       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="7.96257ms"
	I0818 18:41:47.670067       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="18.77µs"
	I0818 18:42:03.393428       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	I0818 18:42:16.240165       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-103000"
	
	
	==> kube-proxy [765d82e220c3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 18:38:49.655362       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 18:38:49.665817       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.2"]
	E0818 18:38:49.666040       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 18:38:49.747697       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 18:38:49.747743       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 18:38:49.747766       1 server_linux.go:169] "Using iptables Proxier"
	I0818 18:38:49.775037       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 18:38:49.777145       1 server.go:483] "Version info" version="v1.31.0"
	I0818 18:38:49.777157       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 18:38:49.786860       1 config.go:197] "Starting service config controller"
	I0818 18:38:49.786924       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 18:38:49.786940       1 config.go:104] "Starting endpoint slice config controller"
	I0818 18:38:49.786966       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 18:38:49.788683       1 config.go:326] "Starting node config controller"
	I0818 18:38:49.788715       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 18:38:49.887162       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0818 18:38:49.887219       1 shared_informer.go:320] Caches are synced for service config
	I0818 18:38:49.889339       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b2bb050b1d87] <==
	W0818 18:38:39.176900       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0818 18:38:39.176933       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 18:38:39.178434       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0818 18:38:39.178547       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 18:38:39.178707       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0818 18:38:39.178822       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0818 18:38:39.178882       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0818 18:38:39.178960       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 18:38:39.179117       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0818 18:38:39.179223       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0818 18:38:39.179341       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0818 18:38:39.179450       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 18:38:39.183102       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0818 18:38:39.183198       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 18:38:40.054891       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0818 18:38:40.054920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 18:38:40.066179       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0818 18:38:40.066329       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 18:38:40.193981       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0818 18:38:40.194218       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 18:38:40.202076       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0818 18:38:40.202257       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 18:38:40.222539       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0818 18:38:40.222704       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0818 18:38:40.459956       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 18 18:43:41 addons-103000 kubelet[2048]: E0818 18:43:41.904174    2048 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 18 18:43:41 addons-103000 kubelet[2048]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 18 18:43:41 addons-103000 kubelet[2048]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 18 18:43:41 addons-103000 kubelet[2048]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 18 18:43:41 addons-103000 kubelet[2048]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 18 18:43:52 addons-103000 kubelet[2048]: I0818 18:43:52.886760    2048 scope.go:117] "RemoveContainer" containerID="381ea22f1354491902cd9676df8d776d0c5b15daf7f2f0b10560b76fec4adc53"
	Aug 18 18:43:52 addons-103000 kubelet[2048]: E0818 18:43:52.886939    2048 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-bc29t_gadget(9bc5548d-277f-47c6-9926-18782bf727cf)\"" pod="gadget/gadget-bc29t" podUID="9bc5548d-277f-47c6-9926-18782bf727cf"
	Aug 18 18:44:07 addons-103000 kubelet[2048]: I0818 18:44:07.887373    2048 scope.go:117] "RemoveContainer" containerID="381ea22f1354491902cd9676df8d776d0c5b15daf7f2f0b10560b76fec4adc53"
	Aug 18 18:44:07 addons-103000 kubelet[2048]: E0818 18:44:07.887499    2048 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-bc29t_gadget(9bc5548d-277f-47c6-9926-18782bf727cf)\"" pod="gadget/gadget-bc29t" podUID="9bc5548d-277f-47c6-9926-18782bf727cf"
	Aug 18 18:44:15 addons-103000 kubelet[2048]: I0818 18:44:15.888098    2048 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-gkn4q" secret="" err="secret \"gcp-auth\" not found"
	Aug 18 18:44:18 addons-103000 kubelet[2048]: I0818 18:44:18.886249    2048 scope.go:117] "RemoveContainer" containerID="381ea22f1354491902cd9676df8d776d0c5b15daf7f2f0b10560b76fec4adc53"
	Aug 18 18:44:18 addons-103000 kubelet[2048]: E0818 18:44:18.886567    2048 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-bc29t_gadget(9bc5548d-277f-47c6-9926-18782bf727cf)\"" pod="gadget/gadget-bc29t" podUID="9bc5548d-277f-47c6-9926-18782bf727cf"
	Aug 18 18:44:29 addons-103000 kubelet[2048]: I0818 18:44:29.886302    2048 scope.go:117] "RemoveContainer" containerID="381ea22f1354491902cd9676df8d776d0c5b15daf7f2f0b10560b76fec4adc53"
	Aug 18 18:44:29 addons-103000 kubelet[2048]: E0818 18:44:29.886436    2048 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-bc29t_gadget(9bc5548d-277f-47c6-9926-18782bf727cf)\"" pod="gadget/gadget-bc29t" podUID="9bc5548d-277f-47c6-9926-18782bf727cf"
	Aug 18 18:44:41 addons-103000 kubelet[2048]: E0818 18:44:41.904373    2048 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 18 18:44:41 addons-103000 kubelet[2048]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 18 18:44:41 addons-103000 kubelet[2048]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 18 18:44:41 addons-103000 kubelet[2048]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 18 18:44:41 addons-103000 kubelet[2048]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 18 18:44:42 addons-103000 kubelet[2048]: I0818 18:44:42.886714    2048 scope.go:117] "RemoveContainer" containerID="381ea22f1354491902cd9676df8d776d0c5b15daf7f2f0b10560b76fec4adc53"
	Aug 18 18:44:42 addons-103000 kubelet[2048]: E0818 18:44:42.886888    2048 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-bc29t_gadget(9bc5548d-277f-47c6-9926-18782bf727cf)\"" pod="gadget/gadget-bc29t" podUID="9bc5548d-277f-47c6-9926-18782bf727cf"
	Aug 18 18:44:43 addons-103000 kubelet[2048]: I0818 18:44:43.885840    2048 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-mg7mv" secret="" err="secret \"gcp-auth\" not found"
	Aug 18 18:44:52 addons-103000 kubelet[2048]: I0818 18:44:52.887164    2048 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-8jjw4" secret="" err="secret \"gcp-auth\" not found"
	Aug 18 18:44:57 addons-103000 kubelet[2048]: I0818 18:44:57.886478    2048 scope.go:117] "RemoveContainer" containerID="381ea22f1354491902cd9676df8d776d0c5b15daf7f2f0b10560b76fec4adc53"
	Aug 18 18:44:57 addons-103000 kubelet[2048]: E0818 18:44:57.886840    2048 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-bc29t_gadget(9bc5548d-277f-47c6-9926-18782bf727cf)\"" pod="gadget/gadget-bc29t" podUID="9bc5548d-277f-47c6-9926-18782bf727cf"
	
	
	==> storage-provisioner [ed4c8659190e] <==
	I0818 18:38:54.274391       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0818 18:38:54.296591       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0818 18:38:54.296641       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0818 18:38:54.310819       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0818 18:38:54.310949       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-103000_a8914686-6ccf-46d2-8941-05d9bc608d64!
	I0818 18:38:54.311894       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"abcdae5b-46d3-474b-b0ed-eb648b372c63", APIVersion:"v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-103000_a8914686-6ccf-46d2-8941-05d9bc608d64 became leader
	I0818 18:38:54.411520       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-103000_a8914686-6ccf-46d2-8941-05d9bc608d64!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p addons-103000 -n addons-103000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-103000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-bmmvw ingress-nginx-admission-patch-vm7d7 test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-103000 describe pod ingress-nginx-admission-create-bmmvw ingress-nginx-admission-patch-vm7d7 test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-103000 describe pod ingress-nginx-admission-create-bmmvw ingress-nginx-admission-patch-vm7d7 test-job-nginx-0: exit status 1 (53.853847ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-bmmvw" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-vm7d7" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-103000 describe pod ingress-nginx-admission-create-bmmvw ingress-nginx-admission-patch-vm7d7 test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (198.65s)

TestCertOptions (252.18s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-435000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
E0818 12:42:37.605991    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:43:35.898534    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/skaffold-977000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:44:03.613042    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/skaffold-977000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-options-435000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : exit status 80 (4m6.503001103s)

-- stdout --
	* [cert-options-435000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-options-435000" primary control-plane node in "cert-options-435000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-options-435000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8e:ff:1f:30:1f:93
	* Failed to start hyperkit VM. Running "minikube delete -p cert-options-435000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f2:4f:36:98:e8:f6
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f2:4f:36:98:e8:f6
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-options-435000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit " : exit status 80
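Both creation attempts failed the same way: the hyperkit driver polls the host's DHCP leases for the new VM's MAC address and never saw one. A minimal manual check, assuming the stock macOS vmnet leases file at /var/db/dhcpd_leases (that path is an assumption; it does not appear in this log):

	# Hypothetical diagnostic: search the vmnet DHCP leases for the MAC
	# the driver reported. The leases file may drop leading zeros from
	# each octet, so normalize the address if a literal match finds nothing.
	grep -i -B2 -A3 'f2:4f:36:98:e8:f6' /var/db/dhcpd_leases \
	  || echo 'no lease recorded for f2:4f:36:98:e8:f6'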
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-435000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p cert-options-435000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 50 (164.475307ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-435000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-amd64 -p cert-options-435000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 50
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
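The four missing-SAN failures above follow directly from the ssh step returning nothing. For reference, a sketch of the same check with the SAN block filtered out (the openssl invocation is the one the test runs; the grep filter is an addition):

	# Inside the VM (e.g. via `minikube ssh`): print only the Subject
	# Alternative Name extension of the apiserver certificate. The test
	# expects 127.0.0.1, 192.168.15.15, localhost and www.google.com here.
	openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'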
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-435000 config view
cert_options_test.go:93: Kubeconfig apiserver port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-435000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p cert-options-435000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 50 (161.018596ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-435000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-amd64 ssh -p cert-options-435000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 50
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port. 
-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-435000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-18 12:46:29.88038 -0700 PDT m=+4145.341908508
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-435000 -n cert-options-435000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-435000 -n cert-options-435000: exit status 7 (79.35236ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0818 12:46:29.958062    6197 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0818 12:46:29.958083    6197 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-435000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-options-435000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-435000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-435000: (5.235261355s)
--- FAIL: TestCertOptions (252.18s)

TestCertExpiration (1712.45s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-048000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
E0818 12:41:19.775287    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/skaffold-977000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:41:31.183093    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:41:48.096663    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-048000 --memory=2048 --cert-expiration=3m --driver=hyperkit : exit status 80 (4m6.720834783s)

-- stdout --
	* [cert-expiration-048000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-expiration-048000" primary control-plane node in "cert-expiration-048000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-expiration-048000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ca:d6:73:ff:45:f1
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-048000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 32:7d:39:8d:b5:2e
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 32:7d:39:8d:b5:2e
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-expiration-048000 --memory=2048 --cert-expiration=3m --driver=hyperkit " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-048000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-048000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : exit status 80 (21m20.375413009s)

-- stdout --
	* [cert-expiration-048000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-048000" primary control-plane node in "cert-expiration-048000" cluster
	* Updating the running hyperkit "cert-expiration-048000" VM ...
	* Updating the running hyperkit "cert-expiration-048000" VM ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-048000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-amd64 start -p cert-expiration-048000 --memory=2048 --cert-expiration=8760h --driver=hyperkit " : exit status 80
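The test provisions with --cert-expiration=3m, waits for the certificates to expire, then restarts with --cert-expiration=8760h and expects the second start to warn about and regenerate the expired certificates. Had the VM been reachable, the expiry date would be visible with a one-liner (a sketch; the cert path is the one TestCertOptions inspects above):

	# Print the notAfter date of the apiserver certificate; with
	# --cert-expiration=3m it should fall roughly three minutes after
	# the first provisioning completed.
	openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt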
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-048000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-048000" primary control-plane node in "cert-expiration-048000" cluster
	* Updating the running hyperkit "cert-expiration-048000" VM ...
	* Updating the running hyperkit "cert-expiration-048000" VM ...
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-048000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-18 13:09:46.925372 -0700 PDT m=+5542.238863291
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-048000 -n cert-expiration-048000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-048000 -n cert-expiration-048000: exit status 7 (91.56091ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E0818 13:09:47.014657    7832 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0818 13:09:47.014679    7832 status.go:249] status error: getting IP: IP address is not set
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-048000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-expiration-048000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-048000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-048000: (5.261529152s)
--- FAIL: TestCertExpiration (1712.45s)
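TestCertExpiration's final assertion (cert_options_test.go:136) fails because the restart output above never mentions expired certificates: both start attempts died on the "IP address is not set" provisioning error before any certificate path was exercised. As a minimal sketch of what such an output assertion amounts to, in Go since the tests themselves are Go (illustrative only; the helper name and matched substrings are assumptions, not the actual minikube test code):

package main

import (
	"fmt"
	"strings"
)

// expiredCertsWarned reports whether a minikube start transcript mentions
// certificate expiry. Illustrative sketch; the real check in
// cert_options_test.go may match different strings.
func expiredCertsWarned(output string) bool {
	out := strings.ToLower(output)
	return strings.Contains(out, "certificate") && strings.Contains(out, "expired")
}

func main() {
	transcript := `* Updating the running hyperkit "cert-expiration-048000" VM ...`
	// Prints false: no expiry warning in the transcript, hence the failure above.
	fmt.Println(expiredCertsWarned(transcript))
}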
TestDockerFlags (252.32s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-387000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
E0818 12:38:35.905649    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/skaffold-977000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:38:35.912905    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/skaffold-977000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:38:35.924723    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/skaffold-977000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:38:35.948063    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/skaffold-977000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:38:35.990412    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/skaffold-977000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:38:36.073765    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/skaffold-977000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:38:36.236416    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/skaffold-977000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:38:36.559781    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/skaffold-977000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:38:37.202908    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/skaffold-977000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:38:38.484693    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/skaffold-977000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:38:41.046973    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/skaffold-977000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:38:46.170319    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/skaffold-977000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:38:56.411577    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/skaffold-977000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:39:16.893569    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/skaffold-977000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:39:57.855120    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/skaffold-977000/client.crt: no such file or directory" logger="UnhandledError"
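Note the spacing of these cert_rotation retries: roughly 7 ms, 12 ms, 23 ms, 42 ms, 83 ms, then fractions of a second, then 1.3 s, 2.6 s, 5 s, 10 s, 20 s, 41 s; the interval approximately doubles after each failure. A minimal sketch of that exponential-backoff pattern, assuming a plain doubling loop with a cap (illustrative; not client-go's actual cert_rotation code, and the function names are assumptions):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries fn, doubling the wait after each failure up to max.
// Sketch of the backoff visible in the cert_rotation timestamps above.
func retryWithBackoff(fn func() error, initial, max time.Duration, attempts int) error {
	wait := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(wait)
		if wait *= 2; wait > max {
			wait = max
		}
	}
	return err
}

func main() {
	err := retryWithBackoff(func() error {
		return errors.New("open client.crt: no such file or directory")
	}, 10*time.Millisecond, time.Second, 5)
	fmt.Println(err) // still failing after 5 attempts, mirroring the log
}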
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-387000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.559811054s)
-- stdout --
	* [docker-flags-387000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "docker-flags-387000" primary control-plane node in "docker-flags-387000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "docker-flags-387000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
-- /stdout --
** stderr ** 
	I0818 12:38:10.755668    5927 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:38:10.755948    5927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:38:10.755953    5927 out.go:358] Setting ErrFile to fd 2...
	I0818 12:38:10.755957    5927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:38:10.756134    5927 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1007/.minikube/bin
	I0818 12:38:10.757718    5927 out.go:352] Setting JSON to false
	I0818 12:38:10.781448    5927 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4061,"bootTime":1724005829,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0818 12:38:10.781563    5927 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:38:10.801971    5927 out.go:177] * [docker-flags-387000] minikube v1.33.1 on Darwin 14.6.1
	I0818 12:38:10.844584    5927 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:38:10.844609    5927 notify.go:220] Checking for updates...
	I0818 12:38:10.886433    5927 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:38:10.907360    5927 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0818 12:38:10.928486    5927 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:38:10.949297    5927 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	I0818 12:38:10.970648    5927 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:38:10.991914    5927 config.go:182] Loaded profile config "force-systemd-flag-608000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:38:10.992008    5927 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:38:11.021570    5927 out.go:177] * Using the hyperkit driver based on user configuration
	I0818 12:38:11.063499    5927 start.go:297] selected driver: hyperkit
	I0818 12:38:11.063512    5927 start.go:901] validating driver "hyperkit" against <nil>
	I0818 12:38:11.063536    5927 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:38:11.066510    5927 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:38:11.066635    5927 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19423-1007/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0818 12:38:11.075209    5927 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0818 12:38:11.079066    5927 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:38:11.079088    5927 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0818 12:38:11.079124    5927 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 12:38:11.079334    5927 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0818 12:38:11.079394    5927 cni.go:84] Creating CNI manager for ""
	I0818 12:38:11.079409    5927 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 12:38:11.079415    5927 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0818 12:38:11.079486    5927 start.go:340] cluster config:
	{Name:docker-flags-387000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-387000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:38:11.079574    5927 iso.go:125] acquiring lock: {Name:mkfcc70059a4397f76252ba67d02448d3569c468 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:38:11.100344    5927 out.go:177] * Starting "docker-flags-387000" primary control-plane node in "docker-flags-387000" cluster
	I0818 12:38:11.121395    5927 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:38:11.121431    5927 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0818 12:38:11.121443    5927 cache.go:56] Caching tarball of preloaded images
	I0818 12:38:11.121557    5927 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0818 12:38:11.121566    5927 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:38:11.121650    5927 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/docker-flags-387000/config.json ...
	I0818 12:38:11.121667    5927 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/docker-flags-387000/config.json: {Name:mk5b5c1d87ef77eb581a7a8af356c04f9fd31e1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:38:11.122049    5927 start.go:360] acquireMachinesLock for docker-flags-387000: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:39:08.071146    5927 start.go:364] duration metric: took 56.950448997s to acquireMachinesLock for "docker-flags-387000"
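The 57-second gap between the WriteFile at 12:38:11 and this line is the machines lock: the lock spec above shows its retry cadence (Delay:500ms Timeout:13m0s), and the wait is presumably due to the concurrent force-systemd-flag-608000 run whose profile config was loaded earlier. A minimal sketch of such a poll-based exclusive lock (illustrative; minikube's real lock implementation differs, and acquireLock is an assumed helper name):

package main

import (
	"fmt"
	"os"
	"time"
)

// acquireLock polls for an exclusive lock file every delay until timeout,
// mirroring the Delay:500ms Timeout:13m0s spec in the log above.
func acquireLock(path string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			return f.Close() // lock held; the holder removes path to release it
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out acquiring %s: %w", path, err)
		}
		time.Sleep(delay)
	}
}

func main() {
	lock := os.TempDir() + "/docker-flags-387000.lock"
	if err := acquireLock(lock, 500*time.Millisecond, 5*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	defer os.Remove(lock) // release
	fmt.Println("acquired", lock)
}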
	I0818 12:39:08.071198    5927 start.go:93] Provisioning new machine with config: &{Name:docker-flags-387000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-387000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:39:08.071260    5927 start.go:125] createHost starting for "" (driver="hyperkit")
	I0818 12:39:08.092652    5927 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0818 12:39:08.092783    5927 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:39:08.092817    5927 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:39:08.101534    5927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53675
	I0818 12:39:08.102124    5927 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:39:08.102619    5927 main.go:141] libmachine: Using API Version  1
	I0818 12:39:08.102630    5927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:39:08.102898    5927 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:39:08.103050    5927 main.go:141] libmachine: (docker-flags-387000) Calling .GetMachineName
	I0818 12:39:08.103161    5927 main.go:141] libmachine: (docker-flags-387000) Calling .DriverName
	I0818 12:39:08.103266    5927 start.go:159] libmachine.API.Create for "docker-flags-387000" (driver="hyperkit")
	I0818 12:39:08.103298    5927 client.go:168] LocalClient.Create starting
	I0818 12:39:08.103330    5927 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem
	I0818 12:39:08.103381    5927 main.go:141] libmachine: Decoding PEM data...
	I0818 12:39:08.103398    5927 main.go:141] libmachine: Parsing certificate...
	I0818 12:39:08.103471    5927 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem
	I0818 12:39:08.103509    5927 main.go:141] libmachine: Decoding PEM data...
	I0818 12:39:08.103519    5927 main.go:141] libmachine: Parsing certificate...
	I0818 12:39:08.103537    5927 main.go:141] libmachine: Running pre-create checks...
	I0818 12:39:08.103551    5927 main.go:141] libmachine: (docker-flags-387000) Calling .PreCreateCheck
	I0818 12:39:08.103625    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:39:08.103777    5927 main.go:141] libmachine: (docker-flags-387000) Calling .GetConfigRaw
	I0818 12:39:08.134780    5927 main.go:141] libmachine: Creating machine...
	I0818 12:39:08.134813    5927 main.go:141] libmachine: (docker-flags-387000) Calling .Create
	I0818 12:39:08.134898    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:39:08.135069    5927 main.go:141] libmachine: (docker-flags-387000) DBG | I0818 12:39:08.134907    5970 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19423-1007/.minikube
	I0818 12:39:08.135131    5927 main.go:141] libmachine: (docker-flags-387000) Downloading /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-1007/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0818 12:39:08.364160    5927 main.go:141] libmachine: (docker-flags-387000) DBG | I0818 12:39:08.364092    5970 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/id_rsa...
	I0818 12:39:08.473243    5927 main.go:141] libmachine: (docker-flags-387000) DBG | I0818 12:39:08.473161    5970 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/docker-flags-387000.rawdisk...
	I0818 12:39:08.473256    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Writing magic tar header
	I0818 12:39:08.473299    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Writing SSH key tar header
	I0818 12:39:08.473906    5927 main.go:141] libmachine: (docker-flags-387000) DBG | I0818 12:39:08.473830    5970 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000 ...
	I0818 12:39:08.846011    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:39:08.846036    5927 main.go:141] libmachine: (docker-flags-387000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/hyperkit.pid
	I0818 12:39:08.846073    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Using UUID aede184f-24e1-4a09-8c0b-b7fa8874a650
	I0818 12:39:08.871439    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Generated MAC ae:36:78:ed:63:ce
	I0818 12:39:08.871457    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-387000
	I0818 12:39:08.871493    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:39:08 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"aede184f-24e1-4a09-8c0b-b7fa8874a650", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:39:08.871536    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:39:08 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"aede184f-24e1-4a09-8c0b-b7fa8874a650", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:39:08.871600    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:39:08 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "aede184f-24e1-4a09-8c0b-b7fa8874a650", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/docker-flags-387000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-387000"}
	I0818 12:39:08.871644    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:39:08 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U aede184f-24e1-4a09-8c0b-b7fa8874a650 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/docker-flags-387000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-387000"
	I0818 12:39:08.871661    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:39:08 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 12:39:08.874665    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:39:08 DEBUG: hyperkit: Pid is 5972
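At this point the driver has handed hyperkit the argv dumped above and records the child pid (5972) for the polling that follows. A rough sketch of that launch-and-record step with os/exec (illustrative; the real driver launches hyperkit through its Go bindings rather than a bare exec.Command, and sleep here is a stand-in for the hyperkit binary):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Stand-in for "/usr/local/bin/hyperkit -A -u -F .../hyperkit.pid ...":
	// start a long-running child without waiting, then report its pid,
	// as the "Pid is 5972" line above does.
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		fmt.Println("start failed:", err)
		return
	}
	fmt.Println("Pid is", cmd.Process.Pid)
}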
	I0818 12:39:08.875238    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 0
	I0818 12:39:08.875256    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:39:08.875319    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 5972
	I0818 12:39:08.876236    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for ae:36:78:ed:63:ce in /var/db/dhcpd_leases ...
	I0818 12:39:08.876319    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:39:08.876336    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:39:08.876356    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:39:08.876371    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:39:08.876394    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:39:08.876416    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:39:08.876431    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:39:08.876448    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:39:08.876472    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:39:08.876488    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:39:08.876501    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:39:08.876510    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:39:08.876530    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:39:08.876540    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:39:08.876554    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:39:08.876571    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:39:08.876579    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:39:08.876592    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
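Each numbered "Attempt" in this block is one pass of the driver scanning macOS's /var/db/dhcpd_leases for the VM's freshly generated MAC (ae:36:78:ed:63:ce); none of the 17 existing minikube leases match yet, so it sleeps about two seconds and rescans. A simplified sketch of the match step (illustrative; the real lease file is a multi-line block format, flattened here to one entry per line):

package main

import (
	"fmt"
	"strings"
)

// findLeaseIP scans dhcpd_leases-style entries for a MAC address and returns
// the associated IP. Simplified sketch of the polling step logged above.
func findLeaseIP(leases []string, mac string) (string, bool) {
	for _, l := range leases {
		if !strings.Contains(l, "HWAddress:"+mac) {
			continue
		}
		for _, f := range strings.Fields(l) {
			if ip, ok := strings.CutPrefix(f, "IPAddress:"); ok {
				return ip, true
			}
		}
	}
	return "", false
}

func main() {
	leases := []string{
		"{Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 Lease:0x66c39dbb}",
	}
	ip, ok := findLeaseIP(leases, "ae:36:78:ed:63:ce")
	fmt.Println(ip, ok) // "" false: no lease yet, so the driver retries
}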
	I0818 12:39:08.882511    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:39:08 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 12:39:08.890476    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:39:08 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 12:39:08.891499    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:39:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:39:08.891534    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:39:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:39:08.891552    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:39:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:39:08.891566    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:39:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:39:09.266328    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:39:09 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 12:39:09.266351    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:39:09 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 12:39:09.380947    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:39:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:39:09.380968    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:39:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:39:09.381007    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:39:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:39:09.381042    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:39:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:39:09.381854    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:39:09 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 12:39:09.381870    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:39:09 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 12:39:10.878435    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 1
	I0818 12:39:10.878465    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:39:10.878542    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 5972
	I0818 12:39:10.879321    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for ae:36:78:ed:63:ce in /var/db/dhcpd_leases ...
	I0818 12:39:10.879373    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:39:10.879384    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:39:10.879392    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:39:10.879398    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:39:10.879407    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:39:10.879415    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:39:10.879439    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:39:10.879453    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:39:10.879462    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:39:10.879472    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:39:10.879480    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:39:10.879488    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:39:10.879502    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:39:10.879515    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:39:10.879534    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:39:10.879542    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:39:10.879548    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:39:10.879556    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:39:12.881517    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 2
	I0818 12:39:12.881546    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:39:12.881622    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 5972
	I0818 12:39:12.882411    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for ae:36:78:ed:63:ce in /var/db/dhcpd_leases ...
	I0818 12:39:12.882424    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:39:12.882433    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:39:12.882442    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:39:12.882450    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:39:12.882458    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:39:12.882472    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:39:12.882481    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:39:12.882494    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:39:12.882502    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:39:12.882521    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:39:12.882538    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:39:12.882554    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:39:12.882566    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:39:12.882574    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:39:12.882583    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:39:12.882590    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:39:12.882597    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:39:12.882610    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:39:14.779738    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:39:14 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0818 12:39:14.779842    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:39:14 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0818 12:39:14.779852    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:39:14 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0818 12:39:14.799766    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:39:14 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0818 12:39:14.884342    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 3
	I0818 12:39:14.884366    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:39:14.884574    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 5972
	I0818 12:39:14.885611    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for ae:36:78:ed:63:ce in /var/db/dhcpd_leases ...
	I0818 12:39:14.885707    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:39:14.885724    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:39:14.885746    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:39:14.885769    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:39:14.885789    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:39:14.885813    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:39:14.885828    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:39:14.885838    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:39:14.885849    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:39:14.885859    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:39:14.885867    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:39:14.885876    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:39:14.885885    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:39:14.885904    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:39:14.885918    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:39:14.885934    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:39:14.885944    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:39:14.885957    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:39:16.886894    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 4
	I0818 12:39:16.886911    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:39:16.887006    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 5972
	I0818 12:39:16.887779    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for ae:36:78:ed:63:ce in /var/db/dhcpd_leases ...
	I0818 12:39:16.887843    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:39:16.887856    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:39:16.887867    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:39:16.887877    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:39:16.887891    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:39:16.887898    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:39:16.887906    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:39:16.887912    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:39:16.887918    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:39:16.887928    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:39:16.887935    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:39:16.887943    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:39:16.887959    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:39:16.887972    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:39:16.887980    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:39:16.887988    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:39:16.888000    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:39:16.888009    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:39:18.889254    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 5
	I0818 12:39:18.889269    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:39:18.889332    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 5972
	I0818 12:39:18.890134    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for ae:36:78:ed:63:ce in /var/db/dhcpd_leases ...
	I0818 12:39:18.890190    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:39:18.890201    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:39:18.890215    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:39:18.890222    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:39:18.890237    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:39:18.890249    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:39:18.890257    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:39:18.890265    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:39:18.890282    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:39:18.890291    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:39:18.890301    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:39:18.890310    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:39:18.890318    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:39:18.890325    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:39:18.890340    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:39:18.890353    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:39:18.890367    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:39:18.890380    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:39:20.890578    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 6
	I0818 12:39:20.890591    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:39:20.890660    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 5972
	I0818 12:39:20.891680    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for ae:36:78:ed:63:ce in /var/db/dhcpd_leases ...
	I0818 12:39:20.891725    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:39:20.891738    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:39:20.891752    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:39:20.891765    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:39:20.891786    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:39:20.891793    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:39:20.891799    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:39:20.891809    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:39:20.891816    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:39:20.891824    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:39:20.891840    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:39:20.891854    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:39:20.891864    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:39:20.891873    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:39:20.891880    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:39:20.891889    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:39:20.891896    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:39:20.891905    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	... Attempts 7 through 19 (12:39:22 to 12:39:46, one scan every two seconds) repeat identically: hyperkit pid 5972 is still running, /var/db/dhcpd_leases still holds the same 17 entries, and none of them matches ae:36:78:ed:63:ce ...
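	(What the loop above is doing: after launching the VM, the driver only learns its IP by polling macOS's DHCP lease database for the MAC it assigned. The sketch below is a minimal illustration of that pattern, not the driver's actual code; the brace-delimited name=/ip_address=/hw_address= record layout assumed for /var/db/dhcpd_leases is inferred from the values the log echoes back.

package main

import (
	"fmt"
	"os"
	"strings"
	"time"
)

// findIPForMAC scans the raw leases text for a record whose hw_address
// field ends with the target MAC. Note the file drops leading zeros in
// MAC octets (e.g. 42:d7:7:52:f5:2e above), so the target must be
// written the same way.
func findIPForMAC(raw, mac string) (string, bool) {
	for _, rec := range strings.Split(raw, "}") {
		var ip string
		match := false
		for _, line := range strings.Split(rec, "\n") {
			line = strings.TrimSpace(line)
			if strings.HasPrefix(line, "ip_address=") {
				ip = strings.TrimPrefix(line, "ip_address=")
			}
			if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac) {
				match = true
			}
		}
		if match && ip != "" {
			return ip, true
		}
	}
	return "", false
}

func main() {
	const leases = "/var/db/dhcpd_leases"
	const mac = "ae:36:78:ed:63:ce" // the MAC the log is searching for
	for attempt := 1; attempt <= 60; attempt++ {
		if raw, err := os.ReadFile(leases); err == nil {
			if ip, ok := findIPForMAC(string(raw), mac); ok {
				fmt.Printf("attempt %d: %s -> %s\n", attempt, mac, ip)
				return
			}
		}
		time.Sleep(2 * time.Second) // matches the two-second cadence above
	}
	fmt.Println("gave up waiting for", mac, "to appear in", leases)
}

In this run the new VM's MAC never shows up, so every pass returns the same 17 stale entries.)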
	I0818 12:39:48.922013    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 20
	I0818 12:39:48.922028    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:39:48.922069    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 5972
	I0818 12:39:48.922876    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for ae:36:78:ed:63:ce in /var/db/dhcpd_leases ...
	I0818 12:39:48.922922    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:39:48.922932    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:39:48.922946    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:39:48.922952    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:39:48.922958    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:39:48.922965    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:39:48.922971    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:39:48.922977    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:39:48.922985    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:39:48.922993    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:39:48.923023    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:39:48.923035    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:39:48.923043    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:39:48.923050    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:39:48.923058    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:39:48.923065    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:39:48.923072    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:39:48.923080    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:39:50.924351    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 21
	I0818 12:39:50.924363    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:39:50.924425    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 5972
	I0818 12:39:50.925190    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for ae:36:78:ed:63:ce in /var/db/dhcpd_leases ...
	I0818 12:39:50.925238    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:39:50.925248    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:39:50.925257    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:39:50.925264    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:39:50.925278    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:39:50.925290    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:39:50.925298    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:39:50.925316    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:39:50.925323    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:39:50.925333    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:39:50.925342    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:39:50.925351    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:39:50.925361    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:39:50.925368    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:39:50.925377    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:39:50.925384    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:39:50.925392    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:39:50.925408    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:39:52.927389    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 22
	I0818 12:39:52.927404    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:39:52.927457    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 5972
	I0818 12:39:52.928252    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for ae:36:78:ed:63:ce in /var/db/dhcpd_leases ...
	I0818 12:39:52.928294    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:39:52.928303    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:39:52.928321    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:39:52.928328    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:39:52.928335    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:39:52.928343    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:39:52.928366    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:39:52.928378    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:39:52.928387    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:39:52.928393    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:39:52.928402    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:39:52.928410    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:39:52.928416    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:39:52.928423    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:39:52.928440    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:39:52.928454    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:39:52.928462    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:39:52.928471    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:39:54.930451    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 23
	I0818 12:39:54.930466    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:39:54.930514    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 5972
	I0818 12:39:54.931348    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for ae:36:78:ed:63:ce in /var/db/dhcpd_leases ...
	I0818 12:39:54.931386    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:39:54.931397    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:39:54.931409    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:39:54.931415    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:39:54.931423    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:39:54.931430    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:39:54.931438    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:39:54.931447    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:39:54.931474    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:39:54.931487    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:39:54.931494    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:39:54.931506    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:39:54.931516    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:39:54.931524    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:39:54.931540    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:39:54.931552    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:39:54.931560    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:39:54.931567    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:39:56.932741    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 24
	I0818 12:39:56.932757    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:39:56.932804    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 5972
	I0818 12:39:56.933687    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for ae:36:78:ed:63:ce in /var/db/dhcpd_leases ...
	I0818 12:39:56.933727    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:39:56.933741    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:39:56.933764    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:39:56.933777    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:39:56.933785    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:39:56.933794    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:39:56.933801    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:39:56.933811    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:39:56.933833    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:39:56.933847    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:39:56.933855    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:39:56.933863    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:39:56.933878    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:39:56.933892    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:39:56.933900    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:39:56.933906    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:39:56.933923    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:39:56.933936    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:39:58.935308    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 25
	I0818 12:39:58.935324    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:39:58.935380    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 5972
	I0818 12:39:58.936146    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for ae:36:78:ed:63:ce in /var/db/dhcpd_leases ...
	I0818 12:39:58.936192    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:39:58.936202    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:39:58.936213    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:39:58.936223    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:39:58.936231    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:39:58.936238    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:39:58.936253    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:39:58.936264    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:39:58.936286    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:39:58.936305    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:39:58.936323    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:39:58.936337    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:39:58.936344    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:39:58.936354    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:39:58.936378    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:39:58.936388    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:39:58.936398    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:39:58.936406    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:40:00.936694    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 26
	I0818 12:40:00.936711    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:40:00.936785    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 5972
	I0818 12:40:00.937584    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for ae:36:78:ed:63:ce in /var/db/dhcpd_leases ...
	I0818 12:40:00.937625    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:40:00.937641    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:40:00.937652    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:40:00.937658    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:40:00.937666    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:40:00.937672    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:40:00.937681    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:40:00.937689    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:40:00.937696    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:40:00.937701    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:40:00.937715    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:40:00.937726    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:40:00.937744    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:40:00.937755    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:40:00.937762    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:40:00.937770    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:40:00.937776    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:40:00.937784    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:40:02.938246    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 27
	I0818 12:40:02.938261    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:40:02.938379    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 5972
	I0818 12:40:02.939152    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for ae:36:78:ed:63:ce in /var/db/dhcpd_leases ...
	I0818 12:40:02.939200    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:40:02.939212    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:40:02.939220    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:40:02.939227    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:40:02.939234    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:40:02.939239    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:40:02.939255    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:40:02.939266    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:40:02.939273    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:40:02.939281    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:40:02.939302    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:40:02.939314    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:40:02.939325    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:40:02.939330    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:40:02.939337    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:40:02.939343    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:40:02.939350    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:40:02.939357    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:40:04.941346    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 28
	I0818 12:40:04.941360    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:40:04.941411    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 5972
	I0818 12:40:04.942186    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for ae:36:78:ed:63:ce in /var/db/dhcpd_leases ...
	I0818 12:40:04.942231    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:40:04.942241    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:40:04.942249    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:40:04.942256    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:40:04.942267    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:40:04.942274    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:40:04.942296    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:40:04.942311    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:40:04.942326    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:40:04.942334    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:40:04.942341    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:40:04.942349    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:40:04.942357    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:40:04.942367    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:40:04.942384    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:40:04.942397    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:40:04.942410    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:40:04.942420    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:40:06.944057    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 29
	I0818 12:40:06.944069    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:40:06.944171    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 5972
	I0818 12:40:06.944947    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for ae:36:78:ed:63:ce in /var/db/dhcpd_leases ...
	I0818 12:40:06.945000    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:40:06.945010    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:40:06.945019    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:40:06.945025    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:40:06.945032    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:40:06.945038    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:40:06.945061    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:40:06.945070    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:40:06.945093    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:40:06.945106    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:40:06.945116    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:40:06.945125    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:40:06.945140    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:40:06.945151    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:40:06.945158    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:40:06.945165    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:40:06.945172    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:40:06.945179    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:40:08.947139    5927 client.go:171] duration metric: took 1m0.84530078s to LocalClient.Create
	I0818 12:40:10.947312    5927 start.go:128] duration metric: took 1m2.87753055s to createHost
	I0818 12:40:10.947367    5927 start.go:83] releasing machines lock for "docker-flags-387000", held for 1m2.8776975s
	W0818 12:40:10.947383    5927 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ae:36:78:ed:63:ce
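
Each numbered attempt above re-reads /var/db/dhcpd_leases on a two-second poll and scans the parsed entries for the VM's generated MAC address ae:36:78:ed:63:ce; when the attempt budget runs out, creation fails with the "IP address never found in dhcp leases file" error shown. Below is a minimal Go sketch of the matching step, assuming entries are already parsed into the Name/IPAddress/HWAddress form printed in the log; leaseEntry, normalizeMAC, and findIPByMAC are illustrative names, not the driver's actual identifiers. Note that the lease file stores hex octets without leading zeros (e.g. 42:d7:7:52:f5:2e above), so a naive string comparison should normalize both sides.

    package main

    import (
        "fmt"
        "strings"
    )

    // leaseEntry mirrors the fields printed in the log above (illustrative type).
    type leaseEntry struct {
        Name      string
        IPAddress string
        HWAddress string
    }

    // normalizeMAC strips leading zeros from each octet, matching the
    // zero-suppressed form macOS writes to /var/db/dhcpd_leases.
    func normalizeMAC(mac string) string {
        octets := strings.Split(strings.ToLower(mac), ":")
        for i, o := range octets {
            octets[i] = strings.TrimLeft(o, "0")
            if octets[i] == "" {
                octets[i] = "0"
            }
        }
        return strings.Join(octets, ":")
    }

    // findIPByMAC returns the IP of the first lease whose hardware address
    // matches the target MAC, or an error when no lease matches (the
    // "could not find an IP address for ..." case in the log).
    func findIPByMAC(entries []leaseEntry, mac string) (string, error) {
        want := normalizeMAC(mac)
        for _, e := range entries {
            if normalizeMAC(e.HWAddress) == want {
                return e.IPAddress, nil
            }
        }
        return "", fmt.Errorf("could not find an IP address for %s", mac)
    }

    func main() {
        entries := []leaseEntry{
            {Name: "minikube", IPAddress: "192.169.0.17", HWAddress: "42:d7:7:52:f5:2e"},
        }
        ip, err := findIPByMAC(entries, "ae:36:78:ed:63:ce")
        fmt.Println(ip, err) // no matching lease, so err is non-nil
    }
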
	I0818 12:40:10.947724    5927 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:40:10.947747    5927 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:40:10.957311    5927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53677
	I0818 12:40:10.957763    5927 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:40:10.958250    5927 main.go:141] libmachine: Using API Version  1
	I0818 12:40:10.958283    5927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:40:10.958591    5927 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:40:10.958970    5927 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:40:10.959012    5927 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:40:10.968059    5927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53679
	I0818 12:40:10.968574    5927 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:40:10.969033    5927 main.go:141] libmachine: Using API Version  1
	I0818 12:40:10.969064    5927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:40:10.969339    5927 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:40:10.969456    5927 main.go:141] libmachine: (docker-flags-387000) Calling .GetState
	I0818 12:40:10.969546    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:40:10.969627    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 5972
	I0818 12:40:10.970610    5927 main.go:141] libmachine: (docker-flags-387000) Calling .DriverName
	I0818 12:40:10.991946    5927 out.go:177] * Deleting "docker-flags-387000" in hyperkit ...
	I0818 12:40:11.034057    5927 main.go:141] libmachine: (docker-flags-387000) Calling .Remove
	I0818 12:40:11.034222    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:40:11.034237    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:40:11.034312    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 5972
	I0818 12:40:11.035239    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:40:11.035321    5927 main.go:141] libmachine: (docker-flags-387000) DBG | waiting for graceful shutdown
	I0818 12:40:12.037436    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:40:12.037495    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 5972
	I0818 12:40:12.038392    5927 main.go:141] libmachine: (docker-flags-387000) DBG | waiting for graceful shutdown
	I0818 12:40:13.039446    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:40:13.039569    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 5972
	I0818 12:40:13.041233    5927 main.go:141] libmachine: (docker-flags-387000) DBG | waiting for graceful shutdown
	I0818 12:40:14.041707    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:40:14.041779    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 5972
	I0818 12:40:14.042567    5927 main.go:141] libmachine: (docker-flags-387000) DBG | waiting for graceful shutdown
	I0818 12:40:15.044139    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:40:15.044218    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 5972
	I0818 12:40:15.044799    5927 main.go:141] libmachine: (docker-flags-387000) DBG | waiting for graceful shutdown
	I0818 12:40:16.045615    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:40:16.045670    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 5972
	I0818 12:40:16.046614    5927 main.go:141] libmachine: (docker-flags-387000) DBG | sending sigkill
	I0818 12:40:16.046624    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:40:16.057639    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:40:16 WARN : hyperkit: failed to read stdout: EOF
	I0818 12:40:16.057663    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:40:16 WARN : hyperkit: failed to read stderr: EOF
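
Deletion first asks the VM to exit and polls the hyperkit process roughly once per second; only after several "waiting for graceful shutdown" polls does it escalate to SIGKILL, at which point the child's stdout/stderr pipes close and the two EOF warnings above appear. A rough sketch of that escalate-after-timeout pattern (stopWithEscalation and the poll budget are illustrative, not the driver's actual code):

    package main

    import (
        "fmt"
        "os"
        "syscall"
        "time"
    )

    // stopWithEscalation waits up to maxPolls one-second polls for pid to
    // exit on its own, then falls back to SIGKILL (the "sending sigkill"
    // step in the log above).
    func stopWithEscalation(pid int, maxPolls int) error {
        for i := 0; i < maxPolls; i++ {
            // Signal 0 only checks whether the process still exists.
            if err := syscall.Kill(pid, 0); err != nil {
                return nil // already gone: graceful shutdown succeeded
            }
            fmt.Println("waiting for graceful shutdown")
            time.Sleep(time.Second)
        }
        proc, err := os.FindProcess(pid)
        if err != nil {
            return err
        }
        fmt.Println("sending sigkill")
        return proc.Kill() // delivers SIGKILL on Unix
    }

    func main() {
        // Demo with a pid that does not exist, so the graceful-exit
        // check succeeds immediately and no signal is sent.
        _ = stopWithEscalation(999999, 5)
    }
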
	W0818 12:40:16.079561    5927 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ae:36:78:ed:63:ce
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ae:36:78:ed:63:ce
	I0818 12:40:16.079580    5927 start.go:729] Will try again in 5 seconds ...
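
A creation failure at this stage is treated as retryable: the partial host is deleted, the process sleeps five seconds, and the whole create path runs once more before the error becomes fatal. A compact sketch of that single-retry flow, with create and deleteHost as stand-ins for the real steps rather than minikube's functions:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errNoIP = errors.New("IP address never found in dhcp leases file")

    // startWithRetry tries create once and, on failure, deletes the partial
    // host, waits, and tries exactly one more time, mirroring the log's
    // "StartHost failed, but will try again" flow.
    func startWithRetry(create func() error, deleteHost func()) error {
        if err := create(); err != nil {
            deleteHost()
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second)
            return create()
        }
        return nil
    }

    func main() {
        attempts := 0
        err := startWithRetry(
            func() error { attempts++; return errNoIP }, // always fails in this demo
            func() { fmt.Println("deleting host") },
        )
        fmt.Println(attempts, err) // 2 attempts; the final error is returned
    }
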
	I0818 12:40:21.081460    5927 start.go:360] acquireMachinesLock for docker-flags-387000: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:41:13.920672    5927 start.go:364] duration metric: took 52.840469784s to acquireMachinesLock for "docker-flags-387000"
	I0818 12:41:13.920707    5927 start.go:93] Provisioning new machine with config: &{Name:docker-flags-387000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-387000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
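
The config above is logged as one flattened struct. For this docker-flags test the fields that matter are the daemon environment and options supplied on the command line, which appear as DockerEnv:[FOO=BAR BAZ=BAT] and DockerOpt:[debug icc=true]. An illustrative Go literal covering just that subset (field names copied from the log; the trimmed struct type is not minikube's full config type):

    package main

    import "fmt"

    // machineConfig holds only the logged fields this test exercises;
    // the real config struct has many more fields.
    type machineConfig struct {
        Name      string
        Memory    int // MB
        CPUs      int
        DiskSize  int // MB
        Driver    string
        DockerEnv []string // forwarded to the Docker daemon's environment
        DockerOpt []string // forwarded as dockerd options
    }

    func main() {
        cfg := machineConfig{
            Name:      "docker-flags-387000",
            Memory:    2048,
            CPUs:      2,
            DiskSize:  20000,
            Driver:    "hyperkit",
            DockerEnv: []string{"FOO=BAR", "BAZ=BAT"},
            DockerOpt: []string{"debug", "icc=true"},
        }
        fmt.Printf("%+v\n", cfg)
    }
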
	I0818 12:41:13.920783    5927 start.go:125] createHost starting for "" (driver="hyperkit")
	I0818 12:41:13.962817    5927 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0818 12:41:13.962880    5927 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:41:13.962908    5927 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:41:13.971608    5927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53683
	I0818 12:41:13.971964    5927 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:41:13.972301    5927 main.go:141] libmachine: Using API Version  1
	I0818 12:41:13.972312    5927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:41:13.972517    5927 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:41:13.972638    5927 main.go:141] libmachine: (docker-flags-387000) Calling .GetMachineName
	I0818 12:41:13.972739    5927 main.go:141] libmachine: (docker-flags-387000) Calling .DriverName
	I0818 12:41:13.972859    5927 start.go:159] libmachine.API.Create for "docker-flags-387000" (driver="hyperkit")
	I0818 12:41:13.972874    5927 client.go:168] LocalClient.Create starting
	I0818 12:41:13.972903    5927 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem
	I0818 12:41:13.972951    5927 main.go:141] libmachine: Decoding PEM data...
	I0818 12:41:13.972961    5927 main.go:141] libmachine: Parsing certificate...
	I0818 12:41:13.973010    5927 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem
	I0818 12:41:13.973050    5927 main.go:141] libmachine: Decoding PEM data...
	I0818 12:41:13.973062    5927 main.go:141] libmachine: Parsing certificate...
	I0818 12:41:13.973075    5927 main.go:141] libmachine: Running pre-create checks...
	I0818 12:41:13.973081    5927 main.go:141] libmachine: (docker-flags-387000) Calling .PreCreateCheck
	I0818 12:41:13.973165    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:13.973200    5927 main.go:141] libmachine: (docker-flags-387000) Calling .GetConfigRaw
	I0818 12:41:13.984080    5927 main.go:141] libmachine: Creating machine...
	I0818 12:41:13.984094    5927 main.go:141] libmachine: (docker-flags-387000) Calling .Create
	I0818 12:41:13.984232    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:13.984355    5927 main.go:141] libmachine: (docker-flags-387000) DBG | I0818 12:41:13.984209    6020 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19423-1007/.minikube
	I0818 12:41:13.984376    5927 main.go:141] libmachine: (docker-flags-387000) Downloading /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-1007/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0818 12:41:14.403953    5927 main.go:141] libmachine: (docker-flags-387000) DBG | I0818 12:41:14.403898    6020 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/id_rsa...
	I0818 12:41:14.567321    5927 main.go:141] libmachine: (docker-flags-387000) DBG | I0818 12:41:14.567254    6020 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/docker-flags-387000.rawdisk...
	I0818 12:41:14.567335    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Writing magic tar header
	I0818 12:41:14.567348    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Writing SSH key tar header
	I0818 12:41:14.588355    5927 main.go:141] libmachine: (docker-flags-387000) DBG | I0818 12:41:14.588321    6020 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000 ...
	I0818 12:41:14.961664    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:14.961684    5927 main.go:141] libmachine: (docker-flags-387000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/hyperkit.pid
	I0818 12:41:14.961719    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Using UUID 6866ae4d-6296-40dc-9bf0-e7f52fb706e1
	I0818 12:41:14.987325    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Generated MAC f6:aa:70:3:f4:f7
	I0818 12:41:14.987342    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-387000
	I0818 12:41:14.987371    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:41:14 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"6866ae4d-6296-40dc-9bf0-e7f52fb706e1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:41:14.987395    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:41:14 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"6866ae4d-6296-40dc-9bf0-e7f52fb706e1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:41:14.987444    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:41:14 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "6866ae4d-6296-40dc-9bf0-e7f52fb706e1", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/docker-flags-387000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-387000"}
	I0818 12:41:14.987483    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:41:14 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 6866ae4d-6296-40dc-9bf0-e7f52fb706e1 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/docker-flags-387000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-387000"
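	
	The two log lines above show the full hyperkit invocation the driver builds from the machine's state directory: a pid file, CPU/memory sizing, hostbridge/lpc/virtio-net/virtio-blk/ahci-cd PCI slots, a serial console logged to console-ring, and a kexec boot of the bzimage/initrd pair with the kernel command line appended. A minimal Go sketch of assembling and starting an equivalent process follows; launchHyperkit and its parameters are illustrative assumptions, not the actual API of docker-machine-driver-hyperkit.
	
	package driver
	
	import (
		"fmt"
		"os/exec"
	)
	
	// launchHyperkit rebuilds an argument vector like the one logged above and
	// starts hyperkit as a child process. All names here (launchHyperkit,
	// stateDir, uuid, cmdline) are assumptions for illustration only.
	func launchHyperkit(stateDir, uuid, cmdline string) (*exec.Cmd, error) {
		args := []string{
			"-A", "-u",
			"-F", stateDir + "/hyperkit.pid", // hyperkit writes its pid here
			"-c", "2", // vCPUs
			"-m", "2048M", // guest memory
			"-s", "0:0,hostbridge", "-s", "31,lpc", // PCI topology
			"-s", "1:0,virtio-net", // NIC; the MAC is derived from -U
			"-U", uuid,
			"-s", "2:0,virtio-blk," + stateDir + "/docker-flags-387000.rawdisk",
			"-s", "3,ahci-cd," + stateDir + "/boot2docker.iso",
			"-s", "4,virtio-rnd",
			"-l", "com1,autopty=" + stateDir + "/tty,log=" + stateDir + "/console-ring",
			"-f", fmt.Sprintf("kexec,%s/bzimage,%s/initrd,%s", stateDir, stateDir, cmdline),
		}
		cmd := exec.Command("/usr/local/bin/hyperkit", args...)
		if err := cmd.Start(); err != nil {
			return nil, err
		}
		return cmd, nil // cmd.Process.Pid corresponds to the "Pid is ..." line below
	}
	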
	I0818 12:41:14.987497    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:41:14 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 12:41:14.990410    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:41:14 DEBUG: hyperkit: Pid is 6034
	I0818 12:41:14.990954    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 0
	I0818 12:41:14.990978    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:14.991000    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 6034
	I0818 12:41:14.991959    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for f6:aa:70:3:f4:f7 in /var/db/dhcpd_leases ...
	I0818 12:41:14.992032    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:41:14.992046    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:41:14.992071    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:41:14.992083    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:41:14.992096    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:41:14.992111    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:41:14.992122    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:41:14.992135    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:41:14.992149    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:41:14.992161    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:41:14.992174    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:41:14.992187    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:41:14.992193    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:41:14.992205    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:41:14.992217    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:41:14.992240    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:41:14.992258    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:41:14.992268    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
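	
	Each "Attempt N" block in this log is one pass of the driver's IP-discovery loop: it re-reads /var/db/dhcpd_leases, compares every entry's HWAddress against the MAC hyperkit generated (f6:aa:70:3:f4:f7), and, finding only the 17 leases left by earlier minikube VMs, sleeps about two seconds before retrying. Below is a minimal sketch of such a polling loop, assuming leases already flattened to single "hw=<mac> ip=<addr>" lines rather than the real multi-line brace-delimited lease format; it is not the driver's implementation.
	
	package driver
	
	import (
		"fmt"
		"os"
		"strings"
		"time"
	)
	
	// findLeaseIP polls a lease file until hwAddr appears, mirroring the
	// "Searching for ... in /var/db/dhcpd_leases" loop above. Illustrative
	// sketch only: it assumes one "hw=<mac> ip=<addr>" entry per line.
	func findLeaseIP(leaseFile, hwAddr string, attempts int) (string, error) {
		for attempt := 0; attempt < attempts; attempt++ {
			if data, err := os.ReadFile(leaseFile); err == nil {
				for _, line := range strings.Split(string(data), "\n") {
					var hw, ip string
					for _, f := range strings.Fields(line) {
						switch {
						case strings.HasPrefix(f, "hw="):
							hw = strings.TrimPrefix(f, "hw=")
						case strings.HasPrefix(f, "ip="):
							ip = strings.TrimPrefix(f, "ip=")
						}
					}
					if hw == hwAddr {
						return ip, nil // new lease found: this becomes the machine IP
					}
				}
			}
			time.Sleep(2 * time.Second) // matches the ~2s spacing between attempts
		}
		return "", fmt.Errorf("no DHCP lease for %s after %d attempts", hwAddr, attempts)
	}
	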
	I0818 12:41:14.998192    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:41:14 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 12:41:15.006439    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:41:15 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/docker-flags-387000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 12:41:15.007457    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:41:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:41:15.007495    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:41:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:41:15.007507    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:41:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:41:15.007518    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:41:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:41:15.384688    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:41:15 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 12:41:15.384704    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:41:15 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 12:41:15.499400    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:41:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:41:15.499420    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:41:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:41:15.499434    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:41:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:41:15.499444    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:41:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:41:15.500317    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:41:15 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 12:41:15.500335    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:41:15 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 12:41:16.994244    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 1
	I0818 12:41:16.994272    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:16.994295    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 6034
	I0818 12:41:16.995104    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for f6:aa:70:3:f4:f7 in /var/db/dhcpd_leases ...
	I0818 12:41:16.995184    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:41:16.995203    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:41:16.995215    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:41:16.995224    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:41:16.995247    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:41:16.995259    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:41:16.995268    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:41:16.995273    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:41:16.995280    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:41:16.995291    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:41:16.995298    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:41:16.995304    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:41:16.995311    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:41:16.995319    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:41:16.995324    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:41:16.995331    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:41:16.995337    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:41:16.995348    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:41:18.995767    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 2
	I0818 12:41:18.995803    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:18.995906    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 6034
	I0818 12:41:18.996758    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for f6:aa:70:3:f4:f7 in /var/db/dhcpd_leases ...
	I0818 12:41:18.996836    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:41:18.996847    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:41:18.996857    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:41:18.996864    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:41:18.996870    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:41:18.996876    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:41:18.996895    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:41:18.996902    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:41:18.996931    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:41:18.996944    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:41:18.996956    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:41:18.996966    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:41:18.996981    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:41:18.996995    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:41:18.997005    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:41:18.997013    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:41:18.997021    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:41:18.997028    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:41:20.932800    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:41:20 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0818 12:41:20.932918    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:41:20 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0818 12:41:20.932929    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:41:20 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0818 12:41:20.952939    5927 main.go:141] libmachine: (docker-flags-387000) DBG | 2024/08/18 12:41:20 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0818 12:41:20.998245    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 3
	I0818 12:41:20.998274    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:20.998476    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 6034
	I0818 12:41:20.999886    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for f6:aa:70:3:f4:f7 in /var/db/dhcpd_leases ...
	I0818 12:41:21.000017    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:41:21.000038    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:41:21.000056    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:41:21.000075    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:41:21.000090    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:41:21.000106    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:41:21.000143    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:41:21.000173    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:41:21.000203    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:41:21.000216    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:41:21.000234    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:41:21.000246    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:41:21.000255    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:41:21.000266    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:41:21.000280    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:41:21.000291    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:41:21.000302    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:41:21.000312    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:41:23.000296    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 4
	I0818 12:41:23.000311    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:23.000391    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 6034
	I0818 12:41:23.001187    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for f6:aa:70:3:f4:f7 in /var/db/dhcpd_leases ...
	I0818 12:41:23.001259    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:41:23.001270    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:41:23.001277    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:41:23.001284    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:41:23.001292    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:41:23.001298    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:41:23.001305    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:41:23.001314    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:41:23.001335    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:41:23.001353    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:41:23.001366    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:41:23.001372    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:41:23.001381    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:41:23.001390    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:41:23.001397    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:41:23.001403    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:41:23.001410    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:41:23.001433    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:41:25.003428    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 5
	I0818 12:41:25.003442    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:25.003494    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 6034
	I0818 12:41:25.004274    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for f6:aa:70:3:f4:f7 in /var/db/dhcpd_leases ...
	I0818 12:41:25.004342    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:41:25.004352    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:41:25.004365    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:41:25.004391    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:41:25.004407    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:41:25.004418    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:41:25.004428    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:41:25.004435    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:41:25.004446    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:41:25.004454    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:41:25.004463    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:41:25.004470    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:41:25.004478    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:41:25.004485    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:41:25.004492    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:41:25.004500    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:41:25.004515    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:41:25.004527    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:41:27.006461    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 6
	I0818 12:41:27.006473    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:27.006572    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 6034
	I0818 12:41:27.007402    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for f6:aa:70:3:f4:f7 in /var/db/dhcpd_leases ...
	I0818 12:41:27.007441    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:41:27.007452    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:41:27.007462    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:41:27.007469    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:41:27.007481    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:41:27.007494    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:41:27.007502    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:41:27.007510    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:41:27.007517    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:41:27.007522    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:41:27.007529    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:41:27.007537    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:41:27.007547    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:41:27.007553    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:41:27.007569    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:41:27.007583    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:41:27.007592    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:41:27.007611    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:41:29.008991    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 7
	I0818 12:41:29.009002    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:29.009065    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 6034
	I0818 12:41:29.009852    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for f6:aa:70:3:f4:f7 in /var/db/dhcpd_leases ...
	I0818 12:41:29.009899    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:41:29.009909    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:41:29.009918    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:41:29.009924    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:41:29.009935    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:41:29.009944    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:41:29.009951    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:41:29.009958    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:41:29.009964    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:41:29.009972    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:41:29.009980    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:41:29.009994    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:41:29.010002    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:41:29.010009    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:41:29.010018    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:41:29.010027    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:41:29.010036    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:41:29.010044    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:41:31.010642    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 8
	I0818 12:41:31.010655    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:31.010736    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 6034
	I0818 12:41:31.011511    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for f6:aa:70:3:f4:f7 in /var/db/dhcpd_leases ...
	I0818 12:41:31.011549    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:41:31.011561    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:41:31.011570    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:41:31.011576    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:41:31.011588    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:41:31.011596    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:41:31.011602    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:41:31.011611    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:41:31.011619    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:41:31.011625    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:41:31.011639    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:41:31.011656    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:41:31.011666    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:41:31.011674    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:41:31.011682    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:41:31.011690    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:41:31.011697    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:41:31.011703    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:41:33.011802    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 9
	I0818 12:41:33.011817    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:33.011885    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 6034
	I0818 12:41:33.012674    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for f6:aa:70:3:f4:f7 in /var/db/dhcpd_leases ...
	I0818 12:41:33.012715    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:41:33.012725    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:41:33.012743    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:41:33.012752    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:41:33.012768    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:41:33.012780    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:41:33.012790    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:41:33.012802    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:41:33.012813    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:41:33.012821    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:41:33.012829    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:41:33.012845    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:41:33.012853    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:41:33.012860    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:41:33.012867    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:41:33.012884    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:41:33.012896    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:41:33.012904    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:41:35.013529    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 10
	I0818 12:41:35.013543    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:35.013596    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 6034
	I0818 12:41:35.014377    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for f6:aa:70:3:f4:f7 in /var/db/dhcpd_leases ...
	I0818 12:41:35.014430    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:41:35.014440    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:41:35.014451    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:41:35.014464    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:41:35.014483    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:41:35.014495    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:41:35.014504    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:41:35.014524    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:41:35.014541    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:41:35.014560    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:41:35.014568    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:41:35.014577    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:41:35.014583    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:41:35.014591    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:41:35.014608    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:41:35.014661    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:41:35.014671    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:41:35.014679    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:41:37.016559    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 11
	I0818 12:41:37.016574    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:37.016625    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 6034
	I0818 12:41:37.017462    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for f6:aa:70:3:f4:f7 in /var/db/dhcpd_leases ...
	I0818 12:41:37.017509    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:41:37.017520    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:41:37.017527    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:41:37.017534    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:41:37.017542    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:41:37.017553    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:41:37.017561    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:41:37.017569    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:41:37.017584    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:41:37.017595    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:41:37.017606    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:41:37.017612    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:41:37.017647    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:41:37.017658    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:41:37.017674    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:41:37.017683    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:41:37.017690    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:41:37.017699    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:41:39.019603    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 12
	I0818 12:41:39.019615    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:39.019672    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 6034
	I0818 12:41:39.020485    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for f6:aa:70:3:f4:f7 in /var/db/dhcpd_leases ...
	I0818 12:41:39.020523    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:41:39.020532    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:41:39.020545    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:41:39.020555    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:41:39.020567    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:41:39.020573    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:41:39.020579    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:41:39.020585    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:41:39.020593    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:41:39.020602    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:41:39.020616    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:41:39.020637    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:41:39.020654    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:41:39.020664    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:41:39.020673    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:41:39.020681    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:41:39.020688    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:41:39.020699    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
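(Each "Attempt" block in this trace is one pass of the same poll: confirm the hyperkit child process (pid 6034) is still running, re-read /var/db/dhcpd_leases, search it for the new VM's MAC address f6:aa:70:3:f4:f7, then sleep roughly two seconds and retry. Every pass finds the same 17 pre-existing minikube leases, 192.169.0.2 through 192.169.0.18, and never the target. As a minimal illustration only, not minikube's actual driver code (the bootpd field names ip_address= and hw_address= are assumptions about the usual /var/db/dhcpd_leases layout, and findIPForMAC is a hypothetical helper), such a lease poll could look like:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// findIPForMAC scans the lease file once and reports the IP bound to mac.
// Assumed record layout (bootpd style): each lease is a "{ ... }" block with
// lines like "ip_address=192.169.0.18" and "hw_address=1,3a:2c:db:9e:9d:78".
func findIPForMAC(path, mac string) (string, bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", false, err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{":
			ip = "" // a new lease record begins; forget the previous IP
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			hw := strings.TrimPrefix(line, "hw_address=")
			if i := strings.IndexByte(hw, ','); i >= 0 {
				hw = hw[i+1:] // drop the "1," hardware-type prefix
			}
			if strings.EqualFold(hw, mac) {
				return ip, true, nil
			}
		}
	}
	return "", false, sc.Err()
}

func main() {
	const mac = "f6:aa:70:3:f4:f7" // the MAC the driver is waiting on
	for attempt := 1; attempt <= 60; attempt++ {
		ip, ok, err := findIPForMAC("/var/db/dhcpd_leases", mac)
		if err != nil {
			fmt.Fprintln(os.Stderr, "read leases:", err)
		} else if ok {
			fmt.Printf("attempt %d: %s is at %s\n", attempt, mac, ip)
			return
		}
		time.Sleep(2 * time.Second) // matches the ~2 s cadence in the log above
	}
	fmt.Fprintln(os.Stderr, "gave up waiting for", mac)
	os.Exit(1)
}

One detail motivates the normalization step in the sketch: bootpd prefixes each MAC with a hardware-type code ("1,") and writes unpadded hex octets (f6:aa:70:3:f4:f7, not f6:aa:70:03:f4:f7), as the entries above show, so a naive byte-for-byte comparison against a zero-padded MAC would never match.)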
	I0818 12:41:41.022418    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 13
	I0818 12:41:41.022432    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:41.022500    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 6034
	I0818 12:41:41.023270    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for f6:aa:70:3:f4:f7 in /var/db/dhcpd_leases ...
	I0818 12:41:41.023313    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:41:41.023325    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:41:41.023337    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:41:41.023347    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:41:41.023361    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:41:41.023374    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:41:41.023387    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:41:41.023397    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:41:41.023404    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:41:41.023415    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:41:41.023421    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:41:41.023430    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:41:41.023439    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:41:41.023446    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:41:41.023454    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:41:41.023461    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:41:41.023468    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:41:41.023475    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:41:43.024475    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 14
	I0818 12:41:43.024487    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:43.024551    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 6034
	I0818 12:41:43.025590    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for f6:aa:70:3:f4:f7 in /var/db/dhcpd_leases ...
	I0818 12:41:43.025653    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:41:43.025664    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:41:43.025673    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:41:43.025680    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:41:43.025686    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:41:43.025701    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:41:43.025708    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:41:43.025715    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:41:43.025734    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:41:43.025745    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:41:43.025753    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:41:43.025773    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:41:43.025793    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:41:43.025802    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:41:43.025815    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:41:43.025823    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:41:43.025832    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:41:43.025838    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:41:45.026951    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 15
	I0818 12:41:45.026964    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:45.027018    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 6034
	I0818 12:41:45.027787    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for f6:aa:70:3:f4:f7 in /var/db/dhcpd_leases ...
	I0818 12:41:45.027836    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:41:45.027853    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:41:45.027863    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:41:45.027882    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:41:45.027897    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:41:45.027917    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:41:45.027926    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:41:45.027933    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:41:45.027945    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:41:45.027958    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:41:45.027978    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:41:45.027987    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:41:45.027997    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:41:45.028006    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:41:45.028014    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:41:45.028022    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:41:45.028030    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:41:45.028038    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:41:47.030000    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 16
	I0818 12:41:47.030012    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:47.030065    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 6034
	I0818 12:41:47.030845    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for f6:aa:70:3:f4:f7 in /var/db/dhcpd_leases ...
	I0818 12:41:47.030898    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:41:47.030913    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:41:47.030930    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:41:47.030940    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:41:47.030947    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:41:47.030956    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:41:47.030972    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:41:47.030984    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:41:47.030993    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:41:47.031018    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:41:47.031030    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:41:47.031036    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:41:47.031048    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:41:47.031057    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:41:47.031064    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:41:47.031071    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:41:47.031085    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:41:47.031097    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:41:49.032944    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 17
	I0818 12:41:49.032961    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:49.033021    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 6034
	I0818 12:41:49.033849    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for f6:aa:70:3:f4:f7 in /var/db/dhcpd_leases ...
	I0818 12:41:49.033882    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:41:49.033891    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:41:49.033901    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:41:49.033909    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:41:49.033916    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:41:49.033922    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:41:49.033928    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:41:49.033937    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:41:49.033944    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:41:49.033950    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:41:49.033960    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:41:49.033967    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:41:49.033974    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:41:49.033982    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:41:49.033991    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:41:49.033999    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:41:49.034008    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:41:49.034015    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:41:51.035930    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 18
	I0818 12:41:51.035944    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:51.036002    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 6034
	I0818 12:41:51.036857    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for f6:aa:70:3:f4:f7 in /var/db/dhcpd_leases ...
	I0818 12:41:51.036921    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:41:51.036932    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:41:51.036940    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:41:51.036945    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:41:51.036973    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:41:51.036990    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:41:51.037009    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:41:51.037020    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:41:51.037029    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:41:51.037037    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:41:51.037043    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:41:51.037052    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:41:51.037059    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:41:51.037067    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:41:51.037074    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:41:51.037088    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:41:51.037095    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:41:51.037104    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:41:53.037524    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 19
	I0818 12:41:53.037536    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:53.037584    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 6034
	I0818 12:41:53.038357    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for f6:aa:70:3:f4:f7 in /var/db/dhcpd_leases ...
	I0818 12:41:53.038418    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:41:53.038435    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:41:53.038454    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:41:53.038467    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:41:53.038474    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:41:53.038491    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:41:53.038500    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:41:53.038509    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:41:53.038516    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:41:53.038524    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:41:53.038538    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:41:53.038554    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:41:53.038563    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:41:53.038570    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:41:53.038577    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:41:53.038585    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:41:53.038600    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:41:53.038612    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:41:55.040581    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 20
	I0818 12:41:55.040594    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:55.040651    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 6034
	I0818 12:41:55.041413    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for f6:aa:70:3:f4:f7 in /var/db/dhcpd_leases ...
	I0818 12:41:55.041470    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:41:55.041484    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:41:55.041500    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:41:55.041516    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:41:55.041524    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:41:55.041532    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:41:55.041539    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:41:55.041547    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:41:55.041562    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:41:55.041571    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:41:55.041577    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:41:55.041589    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:41:55.041600    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:41:55.041608    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:41:55.041615    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:41:55.041621    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:41:55.041627    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:41:55.041635    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:41:57.042044    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 21
	I0818 12:41:57.042060    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:57.042123    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 6034
	I0818 12:41:57.042932    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for f6:aa:70:3:f4:f7 in /var/db/dhcpd_leases ...
	I0818 12:41:57.042969    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:41:57.042976    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:41:57.042986    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:41:57.042995    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:41:57.043001    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:41:57.043008    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:41:57.043022    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:41:57.043030    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:41:57.043045    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:41:57.043056    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:41:57.043071    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:41:57.043077    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:41:57.043084    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:41:57.043090    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:41:57.043107    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:41:57.043118    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:41:57.043135    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:41:57.043148    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:41:59.045066    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 22
	I0818 12:41:59.045082    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:59.045130    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 6034
	I0818 12:41:59.045997    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for f6:aa:70:3:f4:f7 in /var/db/dhcpd_leases ...
	I0818 12:41:59.046050    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:41:59.046063    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:41:59.046085    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:41:59.046095    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:41:59.046121    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:41:59.046138    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:41:59.046145    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:41:59.046152    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:41:59.046159    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:41:59.046166    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:41:59.046173    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:41:59.046198    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:41:59.046209    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:41:59.046217    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:41:59.046223    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:41:59.046231    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:41:59.046240    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:41:59.046246    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:42:01.046243    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 23
	I0818 12:42:01.046268    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:42:01.046318    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 6034
	I0818 12:42:01.047083    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for f6:aa:70:3:f4:f7 in /var/db/dhcpd_leases ...
	I0818 12:42:01.047143    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:42:01.047156    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:42:01.047166    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:42:01.047173    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:42:01.047181    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:42:01.047190    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:42:01.047197    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:42:01.047202    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:42:01.047209    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:42:01.047219    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:42:01.047225    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:42:01.047231    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:42:01.047239    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:42:01.047250    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:42:01.047262    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:42:01.047271    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:42:01.047278    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:42:01.047294    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:42:03.049290    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 24
	I0818 12:42:03.049303    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:42:03.049346    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 6034
	I0818 12:42:03.050205    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for f6:aa:70:3:f4:f7 in /var/db/dhcpd_leases ...
	I0818 12:42:03.050259    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:42:03.050271    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:42:03.050279    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:42:03.050286    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:42:03.050294    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:42:03.050302    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:42:03.050309    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:42:03.050315    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:42:03.050321    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:42:03.050326    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:42:03.050334    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:42:03.050342    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:42:03.050353    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:42:03.050362    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:42:03.050379    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:42:03.050392    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:42:03.050402    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:42:03.050410    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:42:05.050951    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 25
	I0818 12:42:05.050965    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:42:05.051073    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 6034
	I0818 12:42:05.051835    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for f6:aa:70:3:f4:f7 in /var/db/dhcpd_leases ...
	I0818 12:42:05.051882    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:42:05.051894    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:42:05.051903    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:42:05.051909    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:42:05.051937    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:42:05.051952    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:42:05.051960    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:42:05.051969    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:42:05.051976    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:42:05.051992    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:42:05.052013    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:42:05.052023    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:42:05.052029    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:42:05.052037    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:42:05.052053    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:42:05.052066    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:42:05.052081    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:42:05.052090    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:42:07.052640    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 26
	I0818 12:42:07.052660    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:42:07.052730    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 6034
	I0818 12:42:07.053750    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for f6:aa:70:3:f4:f7 in /var/db/dhcpd_leases ...
	I0818 12:42:07.053787    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:42:07.053796    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:42:07.053815    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:42:07.053826    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:42:07.053834    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:42:07.053840    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:42:07.053859    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:42:07.053868    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:42:07.053885    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:42:07.053897    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:42:07.053914    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:42:07.053926    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:42:07.053936    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:42:07.053946    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:42:07.053953    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:42:07.053961    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:42:07.053975    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:42:07.053983    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:42:09.054888    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 27
	I0818 12:42:09.054905    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:42:09.054940    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 6034
	I0818 12:42:09.055817    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for f6:aa:70:3:f4:f7 in /var/db/dhcpd_leases ...
	I0818 12:42:09.055868    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:42:09.055898    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:42:09.055919    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:42:09.055943    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:42:09.055955    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:42:09.055962    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:42:09.055968    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:42:09.055977    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:42:09.055985    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:42:09.055990    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:42:09.056005    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:42:09.056019    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:42:09.056027    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:42:09.056034    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:42:09.056048    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:42:09.056059    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:42:09.056068    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:42:09.056091    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:42:11.058011    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 28
	I0818 12:42:11.058032    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:42:11.058089    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 6034
	I0818 12:42:11.058992    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for f6:aa:70:3:f4:f7 in /var/db/dhcpd_leases ...
	I0818 12:42:11.059044    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:42:11.059056    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:42:11.059068    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:42:11.059079    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:42:11.059100    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:42:11.059114    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:42:11.059122    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:42:11.059132    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:42:11.059153    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:42:11.059166    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:42:11.059174    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:42:11.059183    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:42:11.059191    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:42:11.059199    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:42:11.059206    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:42:11.059212    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:42:11.059219    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:42:11.059227    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:42:13.059321    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Attempt 29
	I0818 12:42:13.059338    5927 main.go:141] libmachine: (docker-flags-387000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:42:13.059408    5927 main.go:141] libmachine: (docker-flags-387000) DBG | hyperkit pid from json: 6034
	I0818 12:42:13.060177    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Searching for f6:aa:70:3:f4:f7 in /var/db/dhcpd_leases ...
	I0818 12:42:13.060222    5927 main.go:141] libmachine: (docker-flags-387000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:42:13.060240    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:42:13.060256    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:42:13.060267    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:42:13.060289    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:42:13.060303    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:42:13.060311    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:42:13.060320    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:42:13.060327    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:42:13.060336    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:42:13.060346    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:42:13.060357    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:42:13.060369    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:42:13.060383    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:42:13.060399    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:42:13.060411    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:42:13.060426    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:42:13.060436    5927 main.go:141] libmachine: (docker-flags-387000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:42:15.062416    5927 client.go:171] duration metric: took 1m1.091010132s to LocalClient.Create
	I0818 12:42:17.064470    5927 start.go:128] duration metric: took 1m3.145204989s to createHost
	I0818 12:42:17.064504    5927 start.go:83] releasing machines lock for "docker-flags-387000", held for 1m3.145336147s
	W0818 12:42:17.064571    5927 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p docker-flags-387000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f6:aa:70:3:f4:f7
	I0818 12:42:17.127734    5927 out.go:201] 
	W0818 12:42:17.148703    5927 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f6:aa:70:3:f4:f7
	W0818 12:42:17.148717    5927 out.go:270] * 
	W0818 12:42:17.149327    5927 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:42:17.211788    5927 out.go:201] 

                                                
                                                
** /stderr **
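
The stderr capture above shows the mechanism behind the failure: the hyperkit driver polls /var/db/dhcpd_leases about every two seconds, scanning each "{Name:... IPAddress:... HWAddress:... Lease:...}" entry for the VM's freshly generated MAC (f6:aa:70:3:f4:f7), and gives up after roughly 30 attempts with "IP address never found in dhcp leases file". The sketch below illustrates that lookup pattern in Go; findIPByMAC is a hypothetical helper written for this report, not the actual docker-machine-driver-hyperkit code.

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"time"
	)

	// findIPByMAC scans macOS dhcpd lease entries for a matching HWAddress
	// and returns the paired IPAddress, or "" when no entry matches.
	func findIPByMAC(leaseFile, mac string) (string, error) {
		data, err := os.ReadFile(leaseFile)
		if err != nil {
			return "", err
		}
		// Entries look like:
		// {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:... Lease:...}
		re := regexp.MustCompile(`IPAddress:(\S+) HWAddress:(\S+) `)
		for _, m := range re.FindAllStringSubmatch(string(data), -1) {
			if m[2] == mac {
				return m[1], nil
			}
		}
		return "", nil
	}

	func main() {
		// The log above shows a ~2s cadence and a give-up near attempt 29.
		for attempt := 0; attempt < 30; attempt++ {
			ip, err := findIPByMAC("/var/db/dhcpd_leases", "f6:aa:70:3:f4:f7")
			if err == nil && ip != "" {
				fmt.Println("found IP:", ip)
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("could not find an IP address for f6:aa:70:3:f4:f7")
	}

Because the guest never completed DHCP in this run, every pass saw the same 17 stale leases and missed the new MAC, which is exactly the GUEST_PROVISION error reported above.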
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-387000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-387000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-387000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 50 (180.169455ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-387000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-387000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 50
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-387000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-387000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 50 (173.84478ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-387000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-387000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 50
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-387000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
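
Both assertions follow the same shape: run "systemctl show docker" inside the VM over minikube ssh and substring-match the returned property, FOO=BAR and BAZ=BAT against Environment and --debug against ExecStart. A simplified standalone Go sketch of the Environment check (illustrative only; the real logic lives in docker_test.go):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Mirrors the failing command above. With a healthy VM the output is
		// something like "Environment=FOO=BAR BAZ=BAT"; here it came back empty
		// because minikube could not resolve the control-plane endpoint.
		out, err := exec.Command("out/minikube-darwin-amd64", "-p", "docker-flags-387000",
			"ssh", "sudo systemctl show docker --property=Environment --no-pager").CombinedOutput()
		if err != nil {
			fmt.Println("ssh failed:", err) // exit status 50 (DRV_CP_ENDPOINT) in this run
		}
		for _, kv := range []string{"FOO=BAR", "BAZ=BAT"} {
			if !strings.Contains(string(out), kv) {
				fmt.Printf("expected env %q in docker's Environment, got %q\n", kv, string(out))
			}
		}
	}

Since the VM never obtained an IP, both property queries returned nothing, so the env and --docker-opt expectations failed on empty output rather than on wrong daemon configuration.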
panic.go:626: *** TestDockerFlags FAILED at 2024-08-18 12:42:17.674705 -0700 PDT m=+3893.130136347
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-387000 -n docker-flags-387000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-387000 -n docker-flags-387000: exit status 7 (89.139414ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0818 12:42:17.761693    6073 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0818 12:42:17.761719    6073 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-387000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "docker-flags-387000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-387000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-387000: (5.255400389s)
--- FAIL: TestDockerFlags (252.32s)

                                                
                                    
TestForceSystemdFlag (252.17s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-608000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
E0818 12:37:37.611580    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-608000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.597632193s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-608000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-flag-608000" primary control-plane node in "force-systemd-flag-608000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-flag-608000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 12:37:07.573559    5861 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:37:07.574316    5861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:37:07.574563    5861 out.go:358] Setting ErrFile to fd 2...
	I0818 12:37:07.574582    5861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:37:07.574918    5861 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1007/.minikube/bin
	I0818 12:37:07.576412    5861 out.go:352] Setting JSON to false
	I0818 12:37:07.599924    5861 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3998,"bootTime":1724005829,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0818 12:37:07.600044    5861 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:37:07.620476    5861 out.go:177] * [force-systemd-flag-608000] minikube v1.33.1 on Darwin 14.6.1
	I0818 12:37:07.662354    5861 notify.go:220] Checking for updates...
	I0818 12:37:07.683212    5861 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:37:07.725215    5861 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:37:07.746233    5861 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0818 12:37:07.766964    5861 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:37:07.787229    5861 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	I0818 12:37:07.808288    5861 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:37:07.829532    5861 config.go:182] Loaded profile config "force-systemd-env-184000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:37:07.829626    5861 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:37:07.858177    5861 out.go:177] * Using the hyperkit driver based on user configuration
	I0818 12:37:07.879184    5861 start.go:297] selected driver: hyperkit
	I0818 12:37:07.879198    5861 start.go:901] validating driver "hyperkit" against <nil>
	I0818 12:37:07.879210    5861 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:37:07.882559    5861 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:37:07.882674    5861 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19423-1007/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0818 12:37:07.891039    5861 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0818 12:37:07.895333    5861 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:37:07.895355    5861 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0818 12:37:07.895394    5861 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 12:37:07.895604    5861 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0818 12:37:07.895659    5861 cni.go:84] Creating CNI manager for ""
	I0818 12:37:07.895673    5861 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 12:37:07.895682    5861 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0818 12:37:07.895748    5861 start.go:340] cluster config:
	{Name:force-systemd-flag-608000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-608000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:37:07.895835    5861 iso.go:125] acquiring lock: {Name:mkfcc70059a4397f76252ba67d02448d3569c468 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:37:07.917029    5861 out.go:177] * Starting "force-systemd-flag-608000" primary control-plane node in "force-systemd-flag-608000" cluster
	I0818 12:37:07.958150    5861 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:37:07.958185    5861 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0818 12:37:07.958199    5861 cache.go:56] Caching tarball of preloaded images
	I0818 12:37:07.958315    5861 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0818 12:37:07.958325    5861 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:37:07.958407    5861 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/force-systemd-flag-608000/config.json ...
	I0818 12:37:07.958425    5861 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/force-systemd-flag-608000/config.json: {Name:mk039dc5c245abb5732da5cd072c63369ac3dd47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:37:07.958783    5861 start.go:360] acquireMachinesLock for force-systemd-flag-608000: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:38:04.938276    5861 start.go:364] duration metric: took 56.980851444s to acquireMachinesLock for "force-systemd-flag-608000"
	I0818 12:38:04.938320    5861 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-608000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-608000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:38:04.938382    5861 start.go:125] createHost starting for "" (driver="hyperkit")
	I0818 12:38:04.959729    5861 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0818 12:38:04.959868    5861 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:38:04.959907    5861 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:38:04.968896    5861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53655
	I0818 12:38:04.969348    5861 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:38:04.969914    5861 main.go:141] libmachine: Using API Version  1
	I0818 12:38:04.969924    5861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:38:04.970323    5861 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:38:04.970473    5861 main.go:141] libmachine: (force-systemd-flag-608000) Calling .GetMachineName
	I0818 12:38:04.970578    5861 main.go:141] libmachine: (force-systemd-flag-608000) Calling .DriverName
	I0818 12:38:04.970737    5861 start.go:159] libmachine.API.Create for "force-systemd-flag-608000" (driver="hyperkit")
	I0818 12:38:04.970762    5861 client.go:168] LocalClient.Create starting
	I0818 12:38:04.970798    5861 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem
	I0818 12:38:04.970849    5861 main.go:141] libmachine: Decoding PEM data...
	I0818 12:38:04.970869    5861 main.go:141] libmachine: Parsing certificate...
	I0818 12:38:04.970923    5861 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem
	I0818 12:38:04.970966    5861 main.go:141] libmachine: Decoding PEM data...
	I0818 12:38:04.970976    5861 main.go:141] libmachine: Parsing certificate...
	I0818 12:38:04.970993    5861 main.go:141] libmachine: Running pre-create checks...
	I0818 12:38:04.971002    5861 main.go:141] libmachine: (force-systemd-flag-608000) Calling .PreCreateCheck
	I0818 12:38:04.971088    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:04.971239    5861 main.go:141] libmachine: (force-systemd-flag-608000) Calling .GetConfigRaw
	I0818 12:38:05.000719    5861 main.go:141] libmachine: Creating machine...
	I0818 12:38:05.000762    5861 main.go:141] libmachine: (force-systemd-flag-608000) Calling .Create
	I0818 12:38:05.000850    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:05.000970    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | I0818 12:38:05.000841    5909 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19423-1007/.minikube
	I0818 12:38:05.001043    5861 main.go:141] libmachine: (force-systemd-flag-608000) Downloading /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-1007/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0818 12:38:05.425332    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | I0818 12:38:05.425236    5909 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/id_rsa...
	I0818 12:38:05.601449    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | I0818 12:38:05.601397    5909 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/force-systemd-flag-608000.rawdisk...
	I0818 12:38:05.601465    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Writing magic tar header
	I0818 12:38:05.601488    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Writing SSH key tar header
	I0818 12:38:05.601822    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | I0818 12:38:05.601777    5909 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000 ...
	I0818 12:38:05.977350    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:05.977367    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/hyperkit.pid
	I0818 12:38:05.977413    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Using UUID 0e495ae1-2c18-4322-b5a8-effc3b4a726c
	I0818 12:38:06.002831    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Generated MAC 26:e:59:20:6c:66
	I0818 12:38:06.002848    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-608000
	I0818 12:38:06.002888    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:38:06 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"0e495ae1-2c18-4322-b5a8-effc3b4a726c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:38:06.002926    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:38:06 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"0e495ae1-2c18-4322-b5a8-effc3b4a726c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:38:06.002977    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:38:06 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "0e495ae1-2c18-4322-b5a8-effc3b4a726c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/force-systemd-flag-608000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-608000"}
	I0818 12:38:06.003013    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:38:06 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 0e495ae1-2c18-4322-b5a8-effc3b4a726c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/force-systemd-flag-608000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-608000"
	I0818 12:38:06.003025    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:38:06 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 12:38:06.006015    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:38:06 DEBUG: hyperkit: Pid is 5923
	I0818 12:38:06.006471    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 0
	I0818 12:38:06.006484    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:06.006553    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:38:06.007479    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 26:e:59:20:6c:66 in /var/db/dhcpd_leases ...
	I0818 12:38:06.007551    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:38:06.007563    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:38:06.007587    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:38:06.007600    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:38:06.007620    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:38:06.007642    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:38:06.007654    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:38:06.007668    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:38:06.007682    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:38:06.007695    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:38:06.007706    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:38:06.007721    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:38:06.007736    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:38:06.007751    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:38:06.007828    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:38:06.007851    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:38:06.007870    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:38:06.007889    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:38:06.013854    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:38:06 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 12:38:06.021793    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:38:06 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 12:38:06.022704    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:38:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:38:06.022720    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:38:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:38:06.022743    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:38:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:38:06.022760    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:38:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:38:06.395767    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:38:06 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 12:38:06.395785    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:38:06 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 12:38:06.510414    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:38:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:38:06.510434    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:38:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:38:06.510478    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:38:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:38:06.510504    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:38:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:38:06.511309    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:38:06 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 12:38:06.511319    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:38:06 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 12:38:08.008710    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 1
	I0818 12:38:08.008736    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:08.008810    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:38:08.009624    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 26:e:59:20:6c:66 in /var/db/dhcpd_leases ...
	I0818 12:38:08.009672    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:38:08.009691    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:38:08.009715    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:38:08.009728    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:38:08.009738    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:38:08.009754    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:38:08.009764    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:38:08.009772    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:38:08.009782    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:38:08.009790    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:38:08.009798    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:38:08.009805    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:38:08.009813    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:38:08.009822    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:38:08.009830    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:38:08.009838    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:38:08.009846    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:38:08.009854    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:38:10.010437    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 2
	I0818 12:38:10.010451    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:10.010569    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:38:10.011458    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 26:e:59:20:6c:66 in /var/db/dhcpd_leases ...
	I0818 12:38:10.011534    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:38:10.011549    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:38:10.011558    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:38:10.011565    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:38:10.011572    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:38:10.011578    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:38:10.011585    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:38:10.011594    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:38:10.011603    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:38:10.011612    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:38:10.011620    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:38:10.011629    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:38:10.011642    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:38:10.011651    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:38:10.011658    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:38:10.011665    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:38:10.011674    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:38:10.011691    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:38:11.922854    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:38:11 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0818 12:38:11.923022    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:38:11 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0818 12:38:11.923042    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:38:11 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0818 12:38:11.942373    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:38:11 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0818 12:38:12.012481    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 3
	I0818 12:38:12.012512    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:12.012646    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:38:12.014165    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 26:e:59:20:6c:66 in /var/db/dhcpd_leases ...
	I0818 12:38:12.014262    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:38:12.014282    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:38:12.014322    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:38:12.014350    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:38:12.014366    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:38:12.014391    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:38:12.014402    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:38:12.014411    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:38:12.014421    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:38:12.014432    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:38:12.014443    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:38:12.014453    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:38:12.014463    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:38:12.014493    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:38:12.014511    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:38:12.014524    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:38:12.014537    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:38:12.014572    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:38:14.015049    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 4
	I0818 12:38:14.015076    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:14.015141    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:38:14.015935    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 26:e:59:20:6c:66 in /var/db/dhcpd_leases ...
	I0818 12:38:14.015977    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:38:14.015992    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:38:14.016001    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:38:14.016008    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:38:14.016020    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:38:14.016026    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:38:14.016034    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:38:14.016043    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:38:14.016051    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:38:14.016057    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:38:14.016072    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:38:14.016084    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:38:14.016093    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:38:14.016102    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:38:14.016119    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:38:14.016135    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:38:14.016144    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:38:14.016153    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:38:16.017570    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 5
	I0818 12:38:16.017586    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:16.017649    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:38:16.018484    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 26:e:59:20:6c:66 in /var/db/dhcpd_leases ...
	I0818 12:38:16.018534    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:38:16.018550    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:38:16.018569    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:38:16.018576    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:38:16.018583    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:38:16.018592    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:38:16.018618    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:38:16.018634    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:38:16.018647    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:38:16.018655    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:38:16.018662    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:38:16.018669    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:38:16.018683    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:38:16.018697    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:38:16.018717    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:38:16.018725    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:38:16.018736    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:38:16.018743    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:38:18.020676    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 6
	I0818 12:38:18.020691    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:18.020844    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:38:18.021614    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 26:e:59:20:6c:66 in /var/db/dhcpd_leases ...
	I0818 12:38:18.021666    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:38:18.021676    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:38:18.021685    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:38:18.021692    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:38:18.021699    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:38:18.021709    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:38:18.021715    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:38:18.021722    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:38:18.021740    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:38:18.021759    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:38:18.021768    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:38:18.021801    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:38:18.021810    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:38:18.021843    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:38:18.021858    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:38:18.021868    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:38:18.021876    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:38:18.021883    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:38:20.022627    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 7
	I0818 12:38:20.022640    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:20.022686    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:38:20.023473    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 26:e:59:20:6c:66 in /var/db/dhcpd_leases ...
	I0818 12:38:20.023537    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:38:20.023548    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:38:20.023558    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:38:20.023569    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:38:20.023582    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:38:20.023590    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:38:20.023601    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:38:20.023620    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:38:20.023636    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:38:20.023646    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:38:20.023654    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:38:20.023662    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:38:20.023670    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:38:20.023678    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:38:20.023685    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:38:20.023693    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:38:20.023700    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:38:20.023706    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:38:22.023919    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 8
	I0818 12:38:22.023934    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:22.023990    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:38:22.024768    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 26:e:59:20:6c:66 in /var/db/dhcpd_leases ...
	I0818 12:38:22.024819    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:38:22.024829    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:38:22.024839    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:38:22.024855    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:38:22.024872    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:38:22.024890    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:38:22.024906    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:38:22.024919    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:38:22.024935    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:38:22.024944    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:38:22.024952    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:38:22.024965    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:38:22.024973    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:38:22.024981    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:38:22.024988    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:38:22.024996    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:38:22.025008    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:38:22.025019    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:38:24.025234    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 9
	I0818 12:38:24.025249    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:24.025309    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:38:24.026095    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 26:e:59:20:6c:66 in /var/db/dhcpd_leases ...
	I0818 12:38:24.026133    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:38:24.026141    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:38:24.026152    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:38:24.026161    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:38:24.026168    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:38:24.026179    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:38:24.026192    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:38:24.026207    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:38:24.026220    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:38:24.026234    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:38:24.026245    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:38:24.026254    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:38:24.026261    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:38:24.026269    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:38:24.026282    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:38:24.026291    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:38:24.026299    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:38:24.026307    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:38:26.028311    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 10
	I0818 12:38:26.028324    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:26.028371    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:38:26.029160    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 26:e:59:20:6c:66 in /var/db/dhcpd_leases ...
	I0818 12:38:26.029196    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:38:26.029205    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:38:26.029216    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:38:26.029223    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:38:26.029230    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:38:26.029236    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:38:26.029243    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:38:26.029251    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:38:26.029258    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:38:26.029266    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:38:26.029283    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:38:26.029292    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:38:26.029299    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:38:26.029322    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:38:26.029333    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:38:26.029342    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:38:26.029349    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:38:26.029357    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:38:28.030022    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 11
	I0818 12:38:28.030055    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:28.030125    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:38:28.030877    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 26:e:59:20:6c:66 in /var/db/dhcpd_leases ...
	I0818 12:38:28.030953    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:38:28.030987    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:38:28.030996    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:38:28.031002    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:38:28.031009    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:38:28.031017    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:38:28.031025    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:38:28.031032    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:38:28.031038    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:38:28.031047    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:38:28.031062    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:38:28.031071    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:38:28.031088    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:38:28.031096    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:38:28.031104    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:38:28.031110    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:38:28.031120    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:38:28.031138    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:38:30.032106    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 12
	I0818 12:38:30.032122    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:30.032161    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:38:30.032973    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 26:e:59:20:6c:66 in /var/db/dhcpd_leases ...
	I0818 12:38:30.033013    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:38:30.033024    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:38:30.033033    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:38:30.033040    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:38:30.033058    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:38:30.033070    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:38:30.033078    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:38:30.033088    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:38:30.033103    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:38:30.033124    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:38:30.033131    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:38:30.033141    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:38:30.033151    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:38:30.033160    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:38:30.033179    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:38:30.033186    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:38:30.033196    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:38:30.033209    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:38:32.035175    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 13
	I0818 12:38:32.035191    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:32.035247    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:38:32.036014    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 26:e:59:20:6c:66 in /var/db/dhcpd_leases ...
	I0818 12:38:32.036073    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:38:32.036099    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:38:32.036112    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:38:32.036124    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:38:32.036134    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:38:32.036142    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:38:32.036150    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:38:32.036163    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:38:32.036172    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:38:32.036179    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:38:32.036190    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:38:32.036198    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:38:32.036205    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:38:32.036214    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:38:32.036221    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:38:32.036227    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:38:32.036234    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:38:32.036243    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:38:34.037533    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 14
	I0818 12:38:34.037547    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:34.037613    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:38:34.038422    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 26:e:59:20:6c:66 in /var/db/dhcpd_leases ...
	I0818 12:38:34.038475    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:38:34.038488    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:38:34.038498    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:38:34.038505    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:38:34.038514    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:38:34.038523    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:38:34.038528    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:38:34.038535    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:38:34.038542    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:38:34.038549    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:38:34.038563    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:38:34.038579    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:38:34.038587    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:38:34.038595    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:38:34.038608    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:38:34.038618    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:38:34.038625    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:38:34.038632    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:38:36.038927    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 15
	I0818 12:38:36.038940    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:36.039006    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:38:36.039751    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 26:e:59:20:6c:66 in /var/db/dhcpd_leases ...
	I0818 12:38:36.039812    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:38:36.039827    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:38:36.039841    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:38:36.039849    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:38:36.039856    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:38:36.039862    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:38:36.039870    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:38:36.039891    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:38:36.039910    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:38:36.039922    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:38:36.039938    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:38:36.039954    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:38:36.039968    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:38:36.039977    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:38:36.039984    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:38:36.039992    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:38:36.040000    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:38:36.040005    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:38:38.040412    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 16
	I0818 12:38:38.040424    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:38.040550    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:38:38.041327    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 26:e:59:20:6c:66 in /var/db/dhcpd_leases ...
	I0818 12:38:38.041375    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:38:38.041385    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:38:38.041399    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:38:38.041406    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:38:38.041424    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:38:38.041438    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:38:38.041446    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:38:38.041455    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:38:38.041465    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:38:38.041474    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:38:38.041500    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:38:38.041515    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:38:38.041524    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:38:38.041532    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:38:38.041545    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:38:38.041553    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:38:38.041561    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:38:38.041569    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:38:40.043465    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 17
	I0818 12:38:40.043479    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:40.043524    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:38:40.044286    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 26:e:59:20:6c:66 in /var/db/dhcpd_leases ...
	I0818 12:38:40.044327    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:38:40.044338    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:38:40.044347    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:38:40.044355    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:38:40.044363    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:38:40.044370    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:38:40.044378    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:38:40.044384    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:38:40.044391    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:38:40.044399    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:38:40.044406    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:38:40.044412    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:38:40.044429    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:38:40.044450    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:38:40.044459    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:38:40.044468    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:38:40.044475    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:38:40.044483    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:38:42.044676    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 18
	I0818 12:38:42.044692    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:42.044749    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:38:42.045558    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 26:e:59:20:6c:66 in /var/db/dhcpd_leases ...
	I0818 12:38:42.045604    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:38:42.045612    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:38:42.045628    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:38:42.045636    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:38:42.045642    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:38:42.045648    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:38:42.045665    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:38:42.045677    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:38:42.045686    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:38:42.045695    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:38:42.045703    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:38:42.045712    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:38:42.045719    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:38:42.045727    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:38:42.045734    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:38:42.045743    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:38:42.045758    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:38:42.045771    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:38:44.047076    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 19
	I0818 12:38:44.047092    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:44.047200    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:38:44.048002    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 26:e:59:20:6c:66 in /var/db/dhcpd_leases ...
	I0818 12:38:44.048056    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:38:44.048068    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:38:44.048078    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:38:44.048087    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:38:44.048095    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:38:44.048100    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:38:44.048107    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:38:44.048114    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:38:44.048123    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:38:44.048130    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:38:44.048137    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:38:44.048143    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:38:44.048152    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:38:44.048161    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:38:44.048178    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:38:44.048186    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:38:44.048194    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:38:44.048202    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:38:46.048683    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 20
	I0818 12:38:46.048698    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:46.048787    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:38:46.049588    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 26:e:59:20:6c:66 in /var/db/dhcpd_leases ...
	I0818 12:38:46.049634    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:38:46.049648    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:38:46.049660    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:38:46.049668    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:38:46.049682    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:38:46.049691    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:38:46.049709    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:38:46.049720    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:38:46.049738    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:38:46.049748    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:38:46.049756    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:38:46.049764    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:38:46.049773    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:38:46.049782    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:38:46.049789    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:38:46.049797    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:38:46.049805    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:38:46.049813    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:38:48.051775    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 21
	I0818 12:38:48.051788    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:48.051879    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:38:48.052764    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 26:e:59:20:6c:66 in /var/db/dhcpd_leases ...
	I0818 12:38:48.052821    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:38:48.052832    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:38:48.052849    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:38:48.052859    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:38:48.052866    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:38:48.052872    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:38:48.052882    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:38:48.052890    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:38:48.052904    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:38:48.052917    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:38:48.052935    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:38:48.052946    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:38:48.052963    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:38:48.052979    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:38:48.052993    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:38:48.053003    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:38:48.053014    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:38:48.053022    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:38:50.053699    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 22
	I0818 12:38:50.053715    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:50.053761    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:38:50.054567    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 26:e:59:20:6c:66 in /var/db/dhcpd_leases ...
	I0818 12:38:50.054615    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:38:50.054625    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:38:50.054641    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:38:50.054656    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:38:50.054666    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:38:50.054676    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:38:50.054685    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:38:50.054697    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:38:50.054706    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:38:50.054716    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:38:50.054724    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:38:50.054732    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:38:50.054749    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:38:50.054757    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:38:50.054765    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:38:50.054772    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:38:50.054778    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:38:50.054805    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:38:52.055600    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 23
	I0818 12:38:52.055616    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:52.055692    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:38:52.056473    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 26:e:59:20:6c:66 in /var/db/dhcpd_leases ...
	I0818 12:38:52.056516    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:38:52.056527    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:38:52.056536    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:38:52.056546    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:38:52.056575    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:38:52.056591    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:38:52.056602    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:38:52.056610    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:38:52.056617    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:38:52.056625    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:38:52.056643    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:38:52.056655    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:38:52.056663    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:38:52.056672    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:38:52.056679    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:38:52.056688    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:38:52.056695    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:38:52.056703    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:38:54.056759    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 24
	I0818 12:38:54.056772    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:54.056816    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:38:54.057690    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 26:e:59:20:6c:66 in /var/db/dhcpd_leases ...
	I0818 12:38:54.057710    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:38:54.057718    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:38:54.057747    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:38:54.057759    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:38:54.057766    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:38:54.057773    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:38:54.057781    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:38:54.057789    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:38:54.057797    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:38:54.057809    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:38:54.057818    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:38:54.057825    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:38:54.057833    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:38:54.057840    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:38:54.057848    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:38:54.057856    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:38:54.057864    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:38:54.057881    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:38:56.058682    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 25
	I0818 12:38:56.058695    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:56.058754    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:38:56.059587    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 26:e:59:20:6c:66 in /var/db/dhcpd_leases ...
	I0818 12:38:56.059645    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:38:56.059656    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:38:56.059665    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:38:56.059671    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:38:56.059678    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:38:56.059689    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:38:56.059697    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:38:56.059706    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:38:56.059714    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:38:56.059721    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:38:56.059735    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:38:56.059750    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:38:56.059758    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:38:56.059765    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:38:56.059774    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:38:56.059782    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:38:56.059797    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:38:56.059809    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:38:58.061214    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 26
	I0818 12:38:58.061238    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:58.061288    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:38:58.062108    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 26:e:59:20:6c:66 in /var/db/dhcpd_leases ...
	I0818 12:38:58.062160    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:38:58.062171    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:38:58.062180    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:38:58.062190    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:38:58.062197    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:38:58.062203    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:38:58.062210    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:38:58.062216    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:38:58.062223    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:38:58.062230    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:38:58.062239    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:38:58.062255    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:38:58.062263    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:38:58.062277    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:38:58.062283    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:38:58.062294    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:38:58.062303    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:38:58.062312    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:39:00.063422    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 27
	I0818 12:39:00.063438    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:39:00.063570    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:39:00.064366    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 26:e:59:20:6c:66 in /var/db/dhcpd_leases ...
	I0818 12:39:00.064418    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:39:00.064428    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:39:00.064438    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:39:00.064448    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:39:00.064455    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:39:00.064460    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:39:00.064473    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:39:00.064485    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:39:00.064494    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:39:00.064502    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:39:00.064518    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:39:00.064528    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:39:00.064544    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:39:00.064552    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:39:00.064560    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:39:00.064568    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:39:00.064575    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:39:00.064583    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:39:02.066613    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 28
	I0818 12:39:02.066630    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:39:02.066678    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:39:02.067493    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 26:e:59:20:6c:66 in /var/db/dhcpd_leases ...
	I0818 12:39:02.067548    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:39:02.067563    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:39:02.067581    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:39:02.067590    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:39:02.067604    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:39:02.067614    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:39:02.067621    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:39:02.067630    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:39:02.067636    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:39:02.067644    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:39:02.067651    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:39:02.067659    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:39:02.067667    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:39:02.067673    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:39:02.067680    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:39:02.067688    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:39:02.067703    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:39:02.067715    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:39:04.067751    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 29
	I0818 12:39:04.067766    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:39:04.067872    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:39:04.068645    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 26:e:59:20:6c:66 in /var/db/dhcpd_leases ...
	I0818 12:39:04.068697    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:39:04.068707    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:39:04.068725    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:39:04.068735    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:39:04.068761    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:39:04.068778    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:39:04.068793    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:39:04.068806    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:39:04.068814    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:39:04.068821    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:39:04.068838    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:39:04.068844    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:39:04.068854    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:39:04.068862    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:39:04.068869    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:39:04.068877    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:39:04.068886    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:39:04.068894    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:39:06.068929    5861 client.go:171] duration metric: took 1m1.099632605s to LocalClient.Create
	I0818 12:39:08.071014    5861 start.go:128] duration metric: took 1m3.134142468s to createHost
	I0818 12:39:08.071070    5861 start.go:83] releasing machines lock for "force-systemd-flag-608000", held for 1m3.134306684s
	W0818 12:39:08.071121    5861 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 26:e:59:20:6c:66
	I0818 12:39:08.071450    5861 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:39:08.071488    5861 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:39:08.080328    5861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53671
	I0818 12:39:08.080816    5861 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:39:08.081187    5861 main.go:141] libmachine: Using API Version  1
	I0818 12:39:08.081214    5861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:39:08.081451    5861 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:39:08.081808    5861 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:39:08.081858    5861 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:39:08.090649    5861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53673
	I0818 12:39:08.091075    5861 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:39:08.091525    5861 main.go:141] libmachine: Using API Version  1
	I0818 12:39:08.091536    5861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:39:08.091799    5861 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:39:08.091926    5861 main.go:141] libmachine: (force-systemd-flag-608000) Calling .GetState
	I0818 12:39:08.092023    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:39:08.092107    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:39:08.093146    5861 main.go:141] libmachine: (force-systemd-flag-608000) Calling .DriverName
	I0818 12:39:08.134556    5861 out.go:177] * Deleting "force-systemd-flag-608000" in hyperkit ...
	I0818 12:39:08.176467    5861 main.go:141] libmachine: (force-systemd-flag-608000) Calling .Remove
	I0818 12:39:08.176592    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:39:08.176604    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:39:08.176663    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:39:08.177588    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:39:08.177655    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | waiting for graceful shutdown
	I0818 12:39:09.179651    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:39:09.179752    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:39:09.180663    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | waiting for graceful shutdown
	I0818 12:39:10.181029    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:39:10.181127    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:39:10.182831    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | waiting for graceful shutdown
	I0818 12:39:11.184389    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:39:11.184457    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:39:11.185157    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | waiting for graceful shutdown
	I0818 12:39:12.187242    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:39:12.187258    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:39:12.187818    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | waiting for graceful shutdown
	I0818 12:39:13.189475    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:39:13.189604    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5923
	I0818 12:39:13.190757    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | sending sigkill
	I0818 12:39:13.190768    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:39:13.202123    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:39:13 WARN : hyperkit: failed to read stderr: EOF
	I0818 12:39:13.202147    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:39:13 WARN : hyperkit: failed to read stdout: EOF
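
The repeated "waiting for graceful shutdown" lines followed by "sending sigkill" show the driver's stop escalation: poll the hyperkit process for a few seconds after asking it to exit, then fall back to SIGKILL. A minimal Go sketch of that pattern, assuming a SIGTERM-first convention (stopWithEscalation is a hypothetical name, not the driver's API):

package driver

import (
	"os"
	"syscall"
	"time"
)

// stopWithEscalation asks pid to exit, polls once per second for up to
// grace, and sends SIGKILL if the process is still alive at the deadline.
func stopWithEscalation(pid int, grace time.Duration) error {
	proc, err := os.FindProcess(pid) // always succeeds on Unix
	if err != nil {
		return err
	}
	_ = proc.Signal(syscall.SIGTERM) // request a graceful shutdown first
	deadline := time.Now().Add(grace)
	for time.Now().Before(deadline) {
		// Signal 0 delivers nothing; it only reports whether pid still exists.
		if err := proc.Signal(syscall.Signal(0)); err != nil {
			return nil // process is gone
		}
		time.Sleep(time.Second)
	}
	return proc.Signal(syscall.SIGKILL) // grace exhausted: sending sigkill
}
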
	W0818 12:39:13.216857    5861 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 26:e:59:20:6c:66
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 26:e:59:20:6c:66
	I0818 12:39:13.216878    5861 start.go:729] Will try again in 5 seconds ...
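
The "Searching for <MAC> in /var/db/dhcpd_leases" attempts above are a poll loop: reread the macOS bootpd lease file on an interval and look for an entry whose hardware address matches the new VM's MAC. A minimal sketch, assuming lease entries carry ip_address= and hw_address=1,<mac> lines (findIPByMAC is a hypothetical name):

package driver

import (
	"fmt"
	"os"
	"strings"
	"time"
)

// findIPByMAC rescans the lease file up to attempts times, remembering the
// last ip_address= seen so it can be returned when the matching
// hw_address= line turns up in the same entry.
func findIPByMAC(leaseFile, mac string, attempts int, every time.Duration) (string, error) {
	for i := 0; i < attempts; i++ {
		if data, err := os.ReadFile(leaseFile); err == nil {
			ip := ""
			for _, line := range strings.Split(string(data), "\n") {
				line = strings.TrimSpace(line)
				if strings.HasPrefix(line, "ip_address=") {
					ip = strings.TrimPrefix(line, "ip_address=")
				}
				// hw_address lines look like "hw_address=1,ca:b5:c4:e6:47:79"
				if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, ","+mac) {
					return ip, nil
				}
			}
		}
		time.Sleep(every)
	}
	return "", fmt.Errorf("could not find an IP address for %s", mac)
}
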
	I0818 12:39:18.218844    5861 start.go:360] acquireMachinesLock for force-systemd-flag-608000: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:40:10.947420    5861 start.go:364] duration metric: took 52.729827499s to acquireMachinesLock for "force-systemd-flag-608000"
	I0818 12:40:10.947464    5861 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-608000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-608000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:40:10.947526    5861 start.go:125] createHost starting for "" (driver="hyperkit")
	I0818 12:40:10.969181    5861 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0818 12:40:10.969261    5861 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:40:10.969276    5861 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:40:10.977823    5861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53681
	I0818 12:40:10.978178    5861 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:40:10.978526    5861 main.go:141] libmachine: Using API Version  1
	I0818 12:40:10.978539    5861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:40:10.978765    5861 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:40:10.978894    5861 main.go:141] libmachine: (force-systemd-flag-608000) Calling .GetMachineName
	I0818 12:40:10.978995    5861 main.go:141] libmachine: (force-systemd-flag-608000) Calling .DriverName
	I0818 12:40:10.979106    5861 start.go:159] libmachine.API.Create for "force-systemd-flag-608000" (driver="hyperkit")
	I0818 12:40:10.979125    5861 client.go:168] LocalClient.Create starting
	I0818 12:40:10.979155    5861 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem
	I0818 12:40:10.979218    5861 main.go:141] libmachine: Decoding PEM data...
	I0818 12:40:10.979231    5861 main.go:141] libmachine: Parsing certificate...
	I0818 12:40:10.979271    5861 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem
	I0818 12:40:10.979308    5861 main.go:141] libmachine: Decoding PEM data...
	I0818 12:40:10.979319    5861 main.go:141] libmachine: Parsing certificate...
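
The "Reading certificate data" / "Decoding PEM data..." / "Parsing certificate..." lines above are a sanity check on the cached machine certs before provisioning starts. A minimal standard-library sketch of those three steps (checkCert is a hypothetical name):

package driver

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// checkCert mirrors the read/decode/parse steps logged above: load a PEM
// file, decode its first block, and parse it as an X.509 certificate.
func checkCert(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("no PEM block in %s", path)
	}
	_, err = x509.ParseCertificate(block.Bytes)
	return err
}
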
	I0818 12:40:10.979334    5861 main.go:141] libmachine: Running pre-create checks...
	I0818 12:40:10.979339    5861 main.go:141] libmachine: (force-systemd-flag-608000) Calling .PreCreateCheck
	I0818 12:40:10.979419    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:40:10.979446    5861 main.go:141] libmachine: (force-systemd-flag-608000) Calling .GetConfigRaw
	I0818 12:40:11.013195    5861 main.go:141] libmachine: Creating machine...
	I0818 12:40:11.013221    5861 main.go:141] libmachine: (force-systemd-flag-608000) Calling .Create
	I0818 12:40:11.013303    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:40:11.013424    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | I0818 12:40:11.013297    5996 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19423-1007/.minikube
	I0818 12:40:11.013472    5861 main.go:141] libmachine: (force-systemd-flag-608000) Downloading /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-1007/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0818 12:40:11.222923    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | I0818 12:40:11.222835    5996 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/id_rsa...
	I0818 12:40:11.437776    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | I0818 12:40:11.437717    5996 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/force-systemd-flag-608000.rawdisk...
	I0818 12:40:11.437793    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Writing magic tar header
	I0818 12:40:11.437819    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Writing SSH key tar header
	I0818 12:40:11.438442    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | I0818 12:40:11.438403    5996 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000 ...
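
The "Writing magic tar header" and "Writing SSH key tar header" steps reflect the boot2docker-style raw disk layout: the .rawdisk file begins with a small tar stream carrying the freshly generated SSH key, which the guest extracts on first boot, and the remainder of the file is left sparse. A sketch of that layout, assuming the boot2docker convention (writeBootstrapDisk is a hypothetical name; the real driver writes more entries):

package driver

import (
	"archive/tar"
	"os"
)

// writeBootstrapDisk creates a raw disk whose first bytes are a tar stream
// holding the machine's SSH key material, then extends the file to the full
// disk size so the tail stays sparse on the host filesystem.
func writeBootstrapDisk(path string, sizeBytes int64, pubKey []byte) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	tw := tar.NewWriter(f)
	if err := tw.WriteHeader(&tar.Header{Name: ".ssh/", Mode: 0700, Typeflag: tar.TypeDir}); err != nil {
		return err
	}
	hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(pubKey))}
	if err := tw.WriteHeader(hdr); err != nil {
		return err
	}
	if _, err := tw.Write(pubKey); err != nil {
		return err
	}
	if err := tw.Close(); err != nil {
		return err
	}
	return f.Truncate(sizeBytes)
}
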
	I0818 12:40:11.815348    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:40:11.815366    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/hyperkit.pid
	I0818 12:40:11.815415    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Using UUID efd3f9cb-fdb0-42e1-9c2b-efbcf7ebcc40
	I0818 12:40:11.840448    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Generated MAC 86:34:92:f9:44:e
	I0818 12:40:11.840467    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-608000
	I0818 12:40:11.840498    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:40:11 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"efd3f9cb-fdb0-42e1-9c2b-efbcf7ebcc40", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:40:11.840527    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:40:11 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"efd3f9cb-fdb0-42e1-9c2b-efbcf7ebcc40", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:40:11.840583    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:40:11 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "efd3f9cb-fdb0-42e1-9c2b-efbcf7ebcc40", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/force-systemd-flag-608000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-608000"}
	I0818 12:40:11.840616    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:40:11 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U efd3f9cb-fdb0-42e1-9c2b-efbcf7ebcc40 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/force-systemd-flag-608000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-608000"
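
The CmdLine above is a direct mapping from the machine config onto hyperkit flags: -c for CPUs, -m for memory, -U for the VM UUID, one -s PCI slot per device (virtio-blk for the raw disk, ahci-cd for the boot ISO, virtio-net, virtio-rnd), and -f kexec,kernel,initrd,cmdline for direct kernel boot. A reduced sketch of assembling that argv, using only the flags that appear in this log (hyperkitArgs is a hypothetical name):

package driver

import "fmt"

// hyperkitArgs assembles the flag list shown in the DEBUG CmdLine above.
// Reduced sketch: only the devices present in this log are included.
func hyperkitArgs(pidFile, uuid, disk, iso, kernel, initrd, cmdline string, cpus, memMB int) []string {
	return []string{
		"-A", "-u",
		"-F", pidFile,
		"-c", fmt.Sprintf("%d", cpus),
		"-m", fmt.Sprintf("%dM", memMB),
		"-s", "0:0,hostbridge",
		"-s", "31,lpc",
		"-s", "1:0,virtio-net",
		"-U", uuid,
		"-s", "2:0,virtio-blk," + disk,
		"-s", "3,ahci-cd," + iso,
		"-s", "4,virtio-rnd",
		"-f", fmt.Sprintf("kexec,%s,%s,%s", kernel, initrd, cmdline),
	}
}
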
	I0818 12:40:11.840625    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:40:11 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 12:40:11.843879    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:40:11 DEBUG: hyperkit: Pid is 5997
	I0818 12:40:11.844319    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 0
	I0818 12:40:11.844338    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:40:11.844419    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5997
	I0818 12:40:11.845318    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 86:34:92:f9:44:e in /var/db/dhcpd_leases ...
	I0818 12:40:11.845398    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:40:11.845418    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:40:11.845452    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:40:11.845478    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:40:11.845498    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:40:11.845512    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:40:11.845537    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:40:11.845559    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:40:11.845579    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:40:11.845596    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:40:11.845608    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:40:11.845621    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:40:11.845635    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:40:11.845648    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:40:11.845660    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:40:11.845673    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:40:11.845689    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:40:11.845702    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:40:11.851549    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:40:11 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 12:40:11.860100    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:40:11 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-flag-608000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 12:40:11.860895    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:40:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:40:11.860917    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:40:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:40:11.860948    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:40:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:40:11.860968    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:40:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:40:12.234542    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:40:12 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 12:40:12.234558    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:40:12 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 12:40:12.349152    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:40:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:40:12.349169    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:40:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:40:12.349223    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:40:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:40:12.349259    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:40:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:40:12.350036    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:40:12 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 12:40:12.350047    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:40:12 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 12:40:13.847042    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 1
	I0818 12:40:13.847059    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:40:13.847166    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5997
	I0818 12:40:13.847935    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 86:34:92:f9:44:e in /var/db/dhcpd_leases ...
	I0818 12:40:13.848004    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:40:13.848012    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:40:13.848023    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:40:13.848039    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:40:13.848052    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:40:13.848064    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:40:13.848072    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:40:13.848085    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:40:13.848106    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:40:13.848119    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:40:13.848133    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:40:13.848143    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:40:13.848151    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:40:13.848157    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:40:13.848164    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:40:13.848173    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:40:13.848183    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:40:13.848198    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:40:15.849988    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 2
	I0818 12:40:15.850005    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:40:15.850104    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5997
	I0818 12:40:15.850953    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 86:34:92:f9:44:e in /var/db/dhcpd_leases ...
	I0818 12:40:15.851025    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:40:15.851037    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:40:15.851046    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:40:15.851056    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:40:15.851065    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:40:15.851071    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:40:15.851078    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:40:15.851085    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:40:15.851093    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:40:15.851101    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:40:15.851108    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:40:15.851115    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:40:15.851121    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:40:15.851128    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:40:15.851135    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:40:15.851143    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:40:15.851151    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:40:15.851159    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:40:17.732693    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:40:17 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0818 12:40:17.732900    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:40:17 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0818 12:40:17.732912    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:40:17 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0818 12:40:17.752660    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | 2024/08/18 12:40:17 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0818 12:40:17.851932    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 3
	I0818 12:40:17.851961    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:40:17.852119    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5997
	I0818 12:40:17.853860    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 86:34:92:f9:44:e in /var/db/dhcpd_leases ...
	I0818 12:40:17.853938    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:40:17.853952    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:40:17.853965    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:40:17.853974    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:40:17.853995    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:40:17.854003    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:40:17.854015    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:40:17.854027    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:40:17.854037    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:40:17.854049    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:40:17.854072    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:40:17.854095    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:40:17.854126    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:40:17.854142    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:40:17.854164    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:40:17.854182    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:40:17.854195    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:40:17.854224    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:40:19.854168    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 4
	I0818 12:40:19.854185    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:40:19.854272    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5997
	I0818 12:40:19.855046    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 86:34:92:f9:44:e in /var/db/dhcpd_leases ...
	I0818 12:40:19.855105    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:40:19.855115    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:40:19.855127    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:40:19.855135    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:40:19.855143    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:40:19.855151    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:40:19.855158    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:40:19.855164    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:40:19.855170    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:40:19.855177    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:40:19.855183    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:40:19.855191    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:40:19.855207    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:40:19.855215    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:40:19.855223    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:40:19.855231    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:40:19.855238    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:40:19.855245    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:40:21.857269    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 5
	I0818 12:40:21.857284    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:40:21.857344    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5997
	I0818 12:40:21.858118    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 86:34:92:f9:44:e in /var/db/dhcpd_leases ...
	I0818 12:40:21.858165    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:40:21.858178    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:40:21.858188    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:40:21.858195    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:40:21.858202    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:40:21.858209    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:40:21.858233    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:40:21.858262    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:40:21.858274    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:40:21.858282    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:40:21.858290    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:40:21.858297    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:40:21.858307    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:40:21.858315    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:40:21.858323    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:40:21.858333    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:40:21.858346    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:40:21.858364    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:40:23.859001    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 6
	I0818 12:40:23.859013    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:40:23.859098    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5997
	I0818 12:40:23.859851    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 86:34:92:f9:44:e in /var/db/dhcpd_leases ...
	I0818 12:40:23.859912    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:40:23.859925    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:40:23.859936    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:40:23.859946    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:40:23.859956    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:40:23.859984    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:40:23.859994    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:40:23.860001    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:40:23.860012    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:40:23.860023    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:40:23.860032    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:40:23.860039    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:40:23.860045    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:40:23.860052    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:40:23.860066    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:40:23.860074    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:40:23.860081    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:40:23.860094    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:40:25.862058    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 7
	I0818 12:40:25.862070    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:40:25.862121    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5997
	I0818 12:40:25.862927    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 86:34:92:f9:44:e in /var/db/dhcpd_leases ...
	I0818 12:40:25.862953    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:40:25.862967    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:40:25.862975    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:40:25.862983    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:40:25.863008    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:40:25.863021    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:40:25.863029    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:40:25.863037    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:40:25.863045    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:40:25.863053    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:40:25.863062    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:40:25.863069    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:40:25.863074    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:40:25.863083    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:40:25.863092    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:40:25.863105    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:40:25.863128    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:40:25.863143    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
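	
	Each "Attempt N" block above is one pass of the driver's lease scan: it confirms the hyperkit pid (5997) is still alive, rereads /var/db/dhcpd_leases, and compares every entry's HWAddress against the VM's MAC, 86:34:92:f9:44:e. Both the target and the recorded leases appear with unpadded hex octets (42:f:73:12:11:a3, for example), so a comparison that normalizes padded and unpadded spellings avoids false negatives. The Go sketch below is illustrative only, not minikube's actual implementation; normalizeMAC and the sample values are assumptions drawn from the entries printed above.
	
	-- sketch --
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// normalizeMAC pads each colon-separated octet to two lowercase hex
	// digits so that padded and unpadded spellings compare equal
	// ("42:f:73:12:11:a3" == "42:0F:73:12:11:A3").
	func normalizeMAC(mac string) string {
		parts := strings.Split(strings.ToLower(mac), ":")
		for i, p := range parts {
			if len(p) == 1 {
				parts[i] = "0" + p
			}
		}
		return strings.Join(parts, ":")
	}
	
	func main() {
		target := "86:34:92:f9:44:e" // MAC being searched for in the log
		// HWAddress values copied from the lease entries above.
		for _, hw := range []string{"3a:2c:db:9e:9d:78", "42:f:73:12:11:a3"} {
			fmt.Println(hw, normalizeMAC(hw) == normalizeMAC(target))
		}
	}
	-- /sketch --
	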
	[log condensed: attempts 8 through 21 (12:40:27 to 12:40:53) repeat the identical scan every two seconds, each finding the same 17 dhcp entries for 192.169.0.2 through 192.169.0.18 in /var/db/dhcpd_leases and no match for 86:34:92:f9:44:e]
	I0818 12:40:55.900962    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 22
	I0818 12:40:55.900976    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:40:55.901040    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5997
	I0818 12:40:55.901833    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 86:34:92:f9:44:e in /var/db/dhcpd_leases ...
	I0818 12:40:55.901882    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:40:55.901892    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:40:55.901902    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:40:55.901915    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:40:55.901926    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:40:55.901935    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:40:55.901945    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:40:55.901954    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:40:55.901966    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:40:55.901979    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:40:55.901988    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:40:55.902002    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:40:55.902010    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:40:55.902017    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:40:55.902030    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:40:55.902039    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:40:55.902046    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:40:55.902054    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:40:57.902818    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 23
	I0818 12:40:57.902830    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:40:57.902909    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5997
	I0818 12:40:57.903861    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 86:34:92:f9:44:e in /var/db/dhcpd_leases ...
	I0818 12:40:57.903903    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:40:57.903917    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:40:57.903924    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:40:57.903932    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:40:57.903945    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:40:57.903955    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:40:57.903961    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:40:57.903968    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:40:57.903977    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:40:57.903984    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:40:57.903993    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:40:57.904000    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:40:57.904008    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:40:57.904020    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:40:57.904031    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:40:57.904039    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:40:57.904047    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:40:57.904056    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:40:59.906060    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 24
	I0818 12:40:59.906074    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:40:59.906143    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5997
	I0818 12:40:59.906957    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 86:34:92:f9:44:e in /var/db/dhcpd_leases ...
	I0818 12:40:59.907021    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:40:59.907034    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:40:59.907042    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:40:59.907048    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:40:59.907066    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:40:59.907086    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:40:59.907095    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:40:59.907103    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:40:59.907111    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:40:59.907118    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:40:59.907127    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:40:59.907134    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:40:59.907142    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:40:59.907151    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:40:59.907158    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:40:59.907166    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:40:59.907180    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:40:59.907188    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:41:01.909340    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 25
	I0818 12:41:01.909353    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:01.909415    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5997
	I0818 12:41:01.910234    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 86:34:92:f9:44:e in /var/db/dhcpd_leases ...
	I0818 12:41:01.910270    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:41:01.910278    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:41:01.910288    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:41:01.910299    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:41:01.910314    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:41:01.910321    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:41:01.910332    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:41:01.910345    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:41:01.910354    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:41:01.910363    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:41:01.910370    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:41:01.910378    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:41:01.910385    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:41:01.910390    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:41:01.910403    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:41:01.910412    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:41:01.910419    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:41:01.910425    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:41:03.912427    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 26
	I0818 12:41:03.912445    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:03.912478    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5997
	I0818 12:41:03.913294    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 86:34:92:f9:44:e in /var/db/dhcpd_leases ...
	I0818 12:41:03.913345    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:41:03.913356    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:41:03.913371    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:41:03.913377    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:41:03.913387    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:41:03.913396    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:41:03.913403    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:41:03.913410    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:41:03.913416    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:41:03.913423    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:41:03.913429    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:41:03.913448    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:41:03.913458    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:41:03.913466    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:41:03.913475    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:41:03.913482    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:41:03.913491    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:41:03.913499    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:41:05.915445    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 27
	I0818 12:41:05.915459    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:05.915516    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5997
	I0818 12:41:05.916377    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 86:34:92:f9:44:e in /var/db/dhcpd_leases ...
	I0818 12:41:05.916424    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:41:05.916437    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:41:05.916456    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:41:05.916464    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:41:05.916475    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:41:05.916481    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:41:05.916487    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:41:05.916494    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:41:05.916501    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:41:05.916508    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:41:05.916516    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:41:05.916523    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:41:05.916529    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:41:05.916536    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:41:05.916543    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:41:05.916551    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:41:05.916559    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:41:05.916574    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:41:07.916660    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 28
	I0818 12:41:07.916672    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:07.917233    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5997
	I0818 12:41:07.917575    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 86:34:92:f9:44:e in /var/db/dhcpd_leases ...
	I0818 12:41:07.917643    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:41:07.917654    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:41:07.917671    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:41:07.917680    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:41:07.917688    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:41:07.917697    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:41:07.917709    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:41:07.917718    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:41:07.917729    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:41:07.917738    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:41:07.917749    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:41:07.917760    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:41:07.917769    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:41:07.917778    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:41:07.917838    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:41:07.917863    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:41:07.917875    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:41:07.917883    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:41:09.918258    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Attempt 29
	I0818 12:41:09.918281    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:41:09.918308    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | hyperkit pid from json: 5997
	I0818 12:41:09.919148    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Searching for 86:34:92:f9:44:e in /var/db/dhcpd_leases ...
	I0818 12:41:09.919205    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:41:09.919222    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:41:09.919245    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:41:09.919259    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:41:09.919276    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:41:09.919288    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:41:09.919297    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:41:09.919306    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:41:09.919313    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:41:09.919322    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:41:09.919336    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:41:09.919345    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:41:09.919353    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:41:09.919361    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:41:09.919369    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:41:09.919376    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:41:09.919390    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:41:09.919400    5861 main.go:141] libmachine: (force-systemd-flag-608000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:41:11.920071    5861 client.go:171] duration metric: took 1m0.942411168s to LocalClient.Create
	I0818 12:41:13.920625    5861 start.go:128] duration metric: took 1m2.974600985s to createHost
	I0818 12:41:13.920639    5861 start.go:83] releasing machines lock for "force-systemd-flag-608000", held for 1m2.974711551s
	W0818 12:41:13.920700    5861 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-608000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 86:34:92:f9:44:e
	* Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-608000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 86:34:92:f9:44:e
	I0818 12:41:13.983719    5861 out.go:201] 
	W0818 12:41:14.004774    5861 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 86:34:92:f9:44:e
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 86:34:92:f9:44:e
	W0818 12:41:14.004789    5861 out.go:270] * 
	* 
	W0818 12:41:14.005403    5861 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:41:14.067649    5861 out.go:201] 

** /stderr **
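The stderr trace above shows the hyperkit driver polling /var/db/dhcpd_leases roughly every two seconds, looking for a lease whose hardware address matches the VM's generated MAC (86:34:92:f9:44:e); after about a minute of attempts ("took 1m0.942411168s to LocalClient.Create") it gives up with "IP address never found in dhcp leases file". Below is a minimal Go sketch of that lookup, assuming the lease-file layout implied by the dhcp entries logged above; findIPForMAC and its field handling are illustrative, not the actual docker-machine-driver-hyperkit code.

// Illustrative sketch only (not the docker-machine-driver-hyperkit source):
// scan /var/db/dhcpd_leases for a hardware address and return its IP,
// assuming brace-delimited entries with fields like
//   ip_address=192.169.0.18
//   hw_address=1,3a:2c:db:9e:9d:78
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPForMAC returns the ip_address of the lease whose hw_address matches
// mac, or an error mirroring the "could not find an IP address" failure above.
func findIPForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			// Remember the IP of the entry currently being read.
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// MACs appear unpadded in the file ("1,42:f:73:12:11:a3"), which
			// is why the driver searches for 86:34:92:f9:44:e rather than
			// a zero-padded form.
			if strings.TrimPrefix(line, "hw_address=") == "1,"+mac {
				return ip, nil
			}
		}
	}
	if err := scanner.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("could not find an IP address for %s", mac)
}

func main() {
	ip, err := findIPForMAC("/var/db/dhcpd_leases", "86:34:92:f9:44:e")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("lease found:", ip)
}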
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-608000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-608000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-608000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (178.794337ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-flag-608000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-608000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-18 12:41:14.350751 -0700 PDT m=+3829.804651485
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-608000 -n force-systemd-flag-608000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-608000 -n force-systemd-flag-608000: exit status 7 (80.326641ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0818 12:41:14.428999    6025 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0818 12:41:14.429022    6025 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-608000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-flag-608000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-608000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-608000: (5.256766766s)
--- FAIL: TestForceSystemdFlag (252.17s)
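The cgroup-driver probe earlier in this test (out/minikube-darwin-amd64 -p force-systemd-flag-608000 ssh "docker info --format {{.CgroupDriver}}") asks the Docker daemon inside the VM for its cgroup driver via a Go template; with --force-systemd the test expects "systemd". A standalone sketch of the same query is below; it is illustrative, not the minikube test helper itself, and can be run against any reachable Docker daemon.

// Illustrative sketch only (not the minikube test helper): query Docker's
// cgroup driver with the same Go template the test uses.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	// With --force-systemd the test expects "systemd" rather than "cgroupfs".
	fmt.Println("cgroup driver:", strings.TrimSpace(string(out)))
}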

TestForceSystemdEnv (234.2s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv


=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-184000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
E0818 12:35:40.693115    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:36:48.104013    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-184000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (3m48.627115116s)

-- stdout --
	* [force-systemd-env-184000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-env-184000" primary control-plane node in "force-systemd-env-184000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-env-184000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0818 12:34:16.556979    5757 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:34:16.557244    5757 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:34:16.557249    5757 out.go:358] Setting ErrFile to fd 2...
	I0818 12:34:16.557253    5757 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:34:16.557422    5757 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1007/.minikube/bin
	I0818 12:34:16.559015    5757 out.go:352] Setting JSON to false
	I0818 12:34:16.581884    5757 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3827,"bootTime":1724005829,"procs":428,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0818 12:34:16.581984    5757 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:34:16.603329    5757 out.go:177] * [force-systemd-env-184000] minikube v1.33.1 on Darwin 14.6.1
	I0818 12:34:16.643903    5757 notify.go:220] Checking for updates...
	I0818 12:34:16.664876    5757 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:34:16.687539    5757 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:34:16.707842    5757 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0818 12:34:16.728641    5757 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:34:16.749817    5757 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	I0818 12:34:16.770818    5757 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0818 12:34:16.791976    5757 config.go:182] Loaded profile config "offline-docker-476000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:34:16.792052    5757 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:34:16.820797    5757 out.go:177] * Using the hyperkit driver based on user configuration
	I0818 12:34:16.862557    5757 start.go:297] selected driver: hyperkit
	I0818 12:34:16.862580    5757 start.go:901] validating driver "hyperkit" against <nil>
	I0818 12:34:16.862590    5757 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:34:16.865543    5757 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:34:16.865654    5757 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19423-1007/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0818 12:34:16.873936    5757 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0818 12:34:16.877770    5757 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:34:16.877789    5757 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0818 12:34:16.877819    5757 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 12:34:16.878048    5757 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0818 12:34:16.878074    5757 cni.go:84] Creating CNI manager for ""
	I0818 12:34:16.878094    5757 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 12:34:16.878100    5757 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0818 12:34:16.878166    5757 start.go:340] cluster config:
	{Name:force-systemd-env-184000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:34:16.878249    5757 iso.go:125] acquiring lock: {Name:mkfcc70059a4397f76252ba67d02448d3569c468 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:34:16.898819    5757 out.go:177] * Starting "force-systemd-env-184000" primary control-plane node in "force-systemd-env-184000" cluster
	I0818 12:34:16.919789    5757 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:34:16.919813    5757 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0818 12:34:16.919826    5757 cache.go:56] Caching tarball of preloaded images
	I0818 12:34:16.919917    5757 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0818 12:34:16.919926    5757 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:34:16.919995    5757 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/force-systemd-env-184000/config.json ...
	I0818 12:34:16.920012    5757 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/force-systemd-env-184000/config.json: {Name:mkb1cc9c8c0f3126caaaeaaa1fcb1391a4ced01b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:34:16.920310    5757 start.go:360] acquireMachinesLock for force-systemd-env-184000: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:34:56.295829    5757 start.go:364] duration metric: took 39.376448606s to acquireMachinesLock for "force-systemd-env-184000"
	I0818 12:34:56.295875    5757 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-184000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:34:56.295937    5757 start.go:125] createHost starting for "" (driver="hyperkit")
	I0818 12:34:56.317163    5757 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0818 12:34:56.317329    5757 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:34:56.317366    5757 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:34:56.326037    5757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53635
	I0818 12:34:56.326493    5757 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:34:56.327117    5757 main.go:141] libmachine: Using API Version  1
	I0818 12:34:56.327127    5757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:34:56.327437    5757 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:34:56.327558    5757 main.go:141] libmachine: (force-systemd-env-184000) Calling .GetMachineName
	I0818 12:34:56.327658    5757 main.go:141] libmachine: (force-systemd-env-184000) Calling .DriverName
	I0818 12:34:56.327769    5757 start.go:159] libmachine.API.Create for "force-systemd-env-184000" (driver="hyperkit")
	I0818 12:34:56.327793    5757 client.go:168] LocalClient.Create starting
	I0818 12:34:56.327823    5757 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem
	I0818 12:34:56.327872    5757 main.go:141] libmachine: Decoding PEM data...
	I0818 12:34:56.327893    5757 main.go:141] libmachine: Parsing certificate...
	I0818 12:34:56.327957    5757 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem
	I0818 12:34:56.327994    5757 main.go:141] libmachine: Decoding PEM data...
	I0818 12:34:56.328007    5757 main.go:141] libmachine: Parsing certificate...
	I0818 12:34:56.328021    5757 main.go:141] libmachine: Running pre-create checks...
	I0818 12:34:56.328029    5757 main.go:141] libmachine: (force-systemd-env-184000) Calling .PreCreateCheck
	I0818 12:34:56.328108    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:34:56.328289    5757 main.go:141] libmachine: (force-systemd-env-184000) Calling .GetConfigRaw
	I0818 12:34:56.339542    5757 main.go:141] libmachine: Creating machine...
	I0818 12:34:56.339554    5757 main.go:141] libmachine: (force-systemd-env-184000) Calling .Create
	I0818 12:34:56.339653    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:34:56.339766    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | I0818 12:34:56.339638    5780 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19423-1007/.minikube
	I0818 12:34:56.339825    5757 main.go:141] libmachine: (force-systemd-env-184000) Downloading /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-1007/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0818 12:34:56.544835    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | I0818 12:34:56.544736    5780 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/id_rsa...
	I0818 12:34:56.702955    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | I0818 12:34:56.702881    5780 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/force-systemd-env-184000.rawdisk...
	I0818 12:34:56.702965    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Writing magic tar header
	I0818 12:34:56.702977    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Writing SSH key tar header
	I0818 12:34:56.703517    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | I0818 12:34:56.703479    5780 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000 ...
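The "Writing magic tar header" / "Writing SSH key tar header" entries above are the boot2docker-style disk bootstrap: the driver writes a small tar archive at offset 0 of the raw disk (a magic marker file plus the freshly generated SSH public key), and the guest formats the remainder of the disk on first boot. A minimal sketch of that idea in Go, following the convention used by docker-machine-style drivers; the paths and the 20000MB size mirror this run's log but are illustrative, not minikube's actual code:

	package main

	import (
		"archive/tar"
		"bytes"
		"log"
		"os"
	)

	func main() {
		const diskPath = "force-systemd-env-184000.rawdisk" // illustrative path
		const diskSize = int64(20000) * 1024 * 1024         // 20000MB, as in the log

		pubKey, err := os.ReadFile("id_rsa.pub") // key created in the step above
		if err != nil {
			log.Fatal(err)
		}

		// Marker file the guest's automount script looks for at the start of the
		// disk (the "magic" the log refers to, per the docker-machine convention).
		magic := []byte("boot2docker, please format-me")

		var buf bytes.Buffer
		tw := tar.NewWriter(&buf)
		write := func(hdr *tar.Header, body []byte) {
			if err := tw.WriteHeader(hdr); err != nil {
				log.Fatal(err)
			}
			if _, err := tw.Write(body); err != nil {
				log.Fatal(err)
			}
		}
		write(&tar.Header{Name: "boot2docker, please format-me", Size: int64(len(magic)), Mode: 0644}, magic)
		if err := tw.WriteHeader(&tar.Header{Name: ".ssh", Typeflag: tar.TypeDir, Mode: 0700}); err != nil {
			log.Fatal(err)
		}
		write(&tar.Header{Name: ".ssh/authorized_keys", Size: int64(len(pubKey)), Mode: 0644}, pubKey)
		if err := tw.Close(); err != nil {
			log.Fatal(err)
		}

		// Write the tar at offset 0, then extend to full size (sparse file).
		f, err := os.Create(diskPath)
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		if _, err := f.Write(buf.Bytes()); err != nil {
			log.Fatal(err)
		}
		if err := f.Truncate(diskSize); err != nil {
			log.Fatal(err)
		}
	}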
	I0818 12:34:57.078280    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:34:57.078299    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/hyperkit.pid
	I0818 12:34:57.078330    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Using UUID 161a3d8f-0e25-404b-8a63-a3f3e49e4869
	I0818 12:34:57.104636    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Generated MAC 36:db:fe:64:56:2b
	I0818 12:34:57.104718    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-184000
	I0818 12:34:57.104759    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:34:57 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"161a3d8f-0e25-404b-8a63-a3f3e49e4869", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00060c1b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:34:57.104789    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:34:57 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"161a3d8f-0e25-404b-8a63-a3f3e49e4869", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00060c1b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:34:57.104858    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:34:57 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "161a3d8f-0e25-404b-8a63-a3f3e49e4869", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/force-systemd-env-184000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-184000"}
	I0818 12:34:57.104895    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:34:57 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 161a3d8f-0e25-404b-8a63-a3f3e49e4869 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/force-systemd-env-184000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-184000"
	I0818 12:34:57.104911    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:34:57 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 12:34:57.107703    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:34:57 DEBUG: hyperkit: Pid is 5782
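The Start/CmdLine/Pid sequence above is the driver shelling out to the hyperkit binary. A stripped-down sketch of that launch in Go, reusing the arguments logged above (stateDir and uuid are this run's values; console and error handling are abbreviated, so this is a sketch rather than the driver's actual code):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Values taken from the logged CmdLine; adjust for your machine.
		stateDir := "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000"
		uuid := "161a3d8f-0e25-404b-8a63-a3f3e49e4869"

		cmd := exec.Command("/usr/local/bin/hyperkit",
			"-A", "-u",
			"-F", stateDir+"/hyperkit.pid",
			"-c", "2", "-m", "2048M",
			"-s", "0:0,hostbridge",
			"-s", "31,lpc",
			"-s", "1:0,virtio-net", // the NIC whose MAC is searched for below
			"-U", uuid,
			"-s", "2:0,virtio-blk,"+stateDir+"/force-systemd-env-184000.rawdisk",
			"-s", "3,ahci-cd,"+stateDir+"/boot2docker.iso",
			"-s", "4,virtio-rnd",
			"-f", "kexec,"+stateDir+"/bzimage,"+stateDir+"/initrd,loglevel=3 console=ttyS0",
		)
		if err := cmd.Start(); err != nil {
			log.Fatal(err)
		}
		// The driver records this pid in hyperkit.pid (the log shows pid 5782).
		log.Printf("hyperkit pid: %d", cmd.Process.Pid)
	}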
	I0818 12:34:57.108271    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 0
	I0818 12:34:57.108288    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:34:57.108374    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:34:57.109326    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 36:db:fe:64:56:2b in /var/db/dhcpd_leases ...
	I0818 12:34:57.109364    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:34:57.109390    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:34:57.109424    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:34:57.109451    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:34:57.109463    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:34:57.109478    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:34:57.109490    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:34:57.109517    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:34:57.109605    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:34:57.109639    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:34:57.109648    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:34:57.109654    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:34:57.109672    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:34:57.109693    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:34:57.109708    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:34:57.109723    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:34:57.109737    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:34:57.109755    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
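After launching the VM, the driver has no direct channel to ask for its IP; it polls the macOS DHCP lease database for the MAC it just generated, retrying every couple of seconds, which is the "Attempt 0", "Attempt 1", ... loop seen here (the new VM has not requested a lease yet, so only the 17 pre-existing entries are found each time). A minimal sketch of that poll, assuming the key=value block layout of /var/db/dhcpd_leases and that ip_address precedes hw_address within a block; findIPByMAC is a hypothetical helper, not minikube's actual function:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
		"time"
	)

	// findIPByMAC scans the bootpd lease file for an entry whose hw_address
	// matches mac and returns its ip_address. (Hypothetical helper; assumes
	// lines like "ip_address=192.169.0.18" and "hw_address=1,3a:2c:db:9e:9d:78".)
	func findIPByMAC(leaseFile, mac string) (string, bool) {
		f, err := os.Open(leaseFile)
		if err != nil {
			return "", false
		}
		defer f.Close()

		var ip string
		scanner := bufio.NewScanner(f)
		for scanner.Scan() {
			line := strings.TrimSpace(scanner.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				if strings.HasSuffix(line, ","+mac) {
					return ip, true
				}
			}
		}
		return "", false
	}

	func main() {
		const mac = "36:db:fe:64:56:2b" // MAC generated in the log above
		for attempt := 0; attempt < 60; attempt++ {
			if ip, ok := findIPByMAC("/var/db/dhcpd_leases", mac); ok {
				fmt.Println("VM IP:", ip)
				return
			}
			time.Sleep(2 * time.Second) // matches the ~2s cadence between attempts
		}
		fmt.Println("timed out waiting for a DHCP lease")
	}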
	I0818 12:34:57.115850    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:34:57 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 12:34:57.124002    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:34:57 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 12:34:57.124866    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:34:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:34:57.124892    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:34:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:34:57.124906    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:34:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:34:57.124917    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:34:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:34:57.497101    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:34:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 12:34:57.497123    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:34:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 12:34:57.611694    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:34:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:34:57.611710    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:34:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:34:57.611722    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:34:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:34:57.611737    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:34:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:34:57.612599    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:34:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 12:34:57.612611    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:34:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 12:34:59.110464    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 1
	I0818 12:34:59.110481    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:34:59.110603    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:34:59.111374    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 36:db:fe:64:56:2b in /var/db/dhcpd_leases ...
	I0818 12:34:59.111430    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:34:59.111438    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:34:59.111446    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:34:59.111456    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:34:59.111469    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:34:59.111482    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:34:59.111514    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:34:59.111530    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:34:59.111544    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:34:59.111552    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:34:59.111559    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:34:59.111570    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:34:59.111577    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:34:59.111586    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:34:59.111593    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:34:59.111602    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:34:59.111612    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:34:59.111621    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:35:01.112625    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 2
	I0818 12:35:01.112643    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:01.112727    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:35:01.113651    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 36:db:fe:64:56:2b in /var/db/dhcpd_leases ...
	I0818 12:35:01.113706    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:35:01.113716    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:35:01.113729    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:35:01.113739    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:35:01.113746    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:35:01.113753    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:35:01.113763    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:35:01.113770    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:35:01.113778    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:35:01.113785    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:35:01.113791    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:35:01.113803    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:35:01.113811    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:35:01.113817    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:35:01.113825    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:35:01.113833    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:35:01.113841    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:35:01.113851    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:35:03.002639    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:35:03 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0818 12:35:03.002757    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:35:03 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0818 12:35:03.002766    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:35:03 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0818 12:35:03.023265    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:35:03 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0818 12:35:03.115483    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 3
	I0818 12:35:03.115511    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:03.115627    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:35:03.117082    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 36:db:fe:64:56:2b in /var/db/dhcpd_leases ...
	I0818 12:35:03.117199    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:35:03.117220    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:35:03.117237    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:35:03.117248    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:35:03.117276    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:35:03.117299    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:35:03.117334    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:35:03.117373    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:35:03.117398    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:35:03.117416    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:35:03.117427    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:35:03.117438    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:35:03.117468    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:35:03.117496    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:35:03.117506    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:35:03.117520    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:35:03.117531    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:35:03.117541    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:35:05.117876    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 4
	I0818 12:35:05.117893    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:05.118011    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:35:05.118831    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 36:db:fe:64:56:2b in /var/db/dhcpd_leases ...
	I0818 12:35:05.118895    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:35:05.118919    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:35:05.118930    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:35:05.118946    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:35:05.118969    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:35:05.118985    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:35:05.118994    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:35:05.119008    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:35:05.119017    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:35:05.119025    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:35:05.119034    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:35:05.119040    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:35:05.119047    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:35:05.119061    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:35:05.119068    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:35:05.119075    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:35:05.119094    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:35:05.119108    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:35:07.119123    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 5
	I0818 12:35:07.119141    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:07.119202    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:35:07.119993    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 36:db:fe:64:56:2b in /var/db/dhcpd_leases ...
	I0818 12:35:07.120051    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:35:07.120066    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:35:07.120092    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:35:07.120105    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:35:07.120117    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:35:07.120130    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:35:07.120139    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:35:07.120145    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:35:07.120158    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:35:07.120169    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:35:07.120177    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:35:07.120185    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:35:07.120192    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:35:07.120200    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:35:07.120208    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:35:07.120216    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:35:07.120224    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:35:07.120232    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:35:09.120425    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 6
	I0818 12:35:09.120441    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:09.120507    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:35:09.121304    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 36:db:fe:64:56:2b in /var/db/dhcpd_leases ...
	I0818 12:35:09.121345    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:35:09.121356    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:35:09.121366    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:35:09.121372    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:35:09.121379    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:35:09.121384    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:35:09.121392    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:35:09.121400    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:35:09.121406    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:35:09.121413    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:35:09.121429    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:35:09.121446    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:35:09.121459    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:35:09.121467    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:35:09.121475    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:35:09.121499    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:35:09.121510    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:35:09.121525    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:35:11.123486    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 7
	I0818 12:35:11.123499    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:11.123568    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:35:11.124351    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 36:db:fe:64:56:2b in /var/db/dhcpd_leases ...
	I0818 12:35:11.124401    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:35:11.124413    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:35:11.124444    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:35:11.124455    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:35:11.124466    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:35:11.124476    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:35:11.124483    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:35:11.124491    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:35:11.124499    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:35:11.124505    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:35:11.124512    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:35:11.124521    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:35:11.124528    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:35:11.124536    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:35:11.124544    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:35:11.124566    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:35:11.124572    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:35:11.124582    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:35:13.124647    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 8
	I0818 12:35:13.124662    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:13.124710    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:35:13.125512    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 36:db:fe:64:56:2b in /var/db/dhcpd_leases ...
	I0818 12:35:13.125564    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:35:13.125575    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:35:13.125585    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:35:13.125601    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:35:13.125617    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:35:13.125631    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:35:13.125650    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:35:13.125668    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:35:13.125676    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:35:13.125689    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:35:13.125701    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:35:13.125711    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:35:13.125719    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:35:13.125725    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:35:13.125736    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:35:13.125752    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:35:13.125762    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:35:13.125772    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:35:15.126507    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 9
	I0818 12:35:15.126522    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:15.126532    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:35:15.127294    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 36:db:fe:64:56:2b in /var/db/dhcpd_leases ...
	I0818 12:35:15.127362    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:35:15.127373    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:35:15.127390    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:35:15.127406    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:35:15.127419    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:35:15.127429    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:35:15.127437    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:35:15.127445    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:35:15.127452    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:35:15.127460    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:35:15.127477    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:35:15.127488    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:35:15.127503    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:35:15.127512    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:35:15.127520    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:35:15.127528    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:35:15.127537    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:35:15.127545    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:35:17.128431    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 10
	I0818 12:35:17.128448    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:17.128502    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:35:17.129271    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 36:db:fe:64:56:2b in /var/db/dhcpd_leases ...
	I0818 12:35:17.129316    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:35:17.129327    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:35:17.129335    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:35:17.129342    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:35:17.129349    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:35:17.129355    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:35:17.129361    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:35:17.129368    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:35:17.129373    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:35:17.129399    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:35:17.129415    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:35:17.129422    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:35:17.129429    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:35:17.129441    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:35:17.129453    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:35:17.129462    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:35:17.129470    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:35:17.129485    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:35:19.130362    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 11
	I0818 12:35:19.130384    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:19.130439    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:35:19.131260    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 36:db:fe:64:56:2b in /var/db/dhcpd_leases ...
	I0818 12:35:19.131306    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:35:19.131317    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:35:19.131325    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:35:19.131336    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:35:19.131345    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:35:19.131351    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:35:19.131357    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:35:19.131363    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:35:19.131370    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:35:19.131378    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:35:19.131388    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:35:19.131393    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:35:19.131410    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:35:19.131424    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:35:19.131431    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:35:19.131441    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:35:19.131449    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:35:19.131458    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:35:21.131966    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 12
	I0818 12:35:21.131977    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:21.132038    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:35:21.132776    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 36:db:fe:64:56:2b in /var/db/dhcpd_leases ...
	I0818 12:35:21.132828    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:35:21.132839    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:35:21.132870    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:35:21.132880    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:35:21.132887    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:35:21.132894    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:35:21.132903    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:35:21.132910    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:35:21.132918    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:35:21.132925    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:35:21.132931    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:35:21.132938    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:35:21.132947    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:35:21.132955    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:35:21.132962    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:35:21.132970    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:35:21.132977    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:35:21.132985    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:35:23.133140    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 13
	I0818 12:35:23.133154    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:23.133225    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:35:23.134050    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 36:db:fe:64:56:2b in /var/db/dhcpd_leases ...
	I0818 12:35:23.134093    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:35:23.134109    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:35:23.134131    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:35:23.134138    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:35:23.134145    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:35:23.134151    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:35:23.134165    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:35:23.134180    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:35:23.134198    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:35:23.134211    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:35:23.134221    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:35:23.134230    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:35:23.134238    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:35:23.134246    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:35:23.134254    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:35:23.134260    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:35:23.134270    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:35:23.134277    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:35:25.136263    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 14
	I0818 12:35:25.136279    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:25.136343    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:35:25.137247    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 36:db:fe:64:56:2b in /var/db/dhcpd_leases ...
	I0818 12:35:25.137268    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:35:25.137300    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:35:25.137309    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:35:25.137330    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:35:25.137344    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:35:25.137354    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:35:25.137362    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:35:25.137369    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:35:25.137381    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:35:25.137400    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:35:25.137414    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:35:25.137431    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:35:25.137441    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:35:25.137448    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:35:25.137457    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:35:25.137470    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:35:25.137483    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:35:25.137493    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:35:27.139441    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 15
	I0818 12:35:27.139455    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:27.139498    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:35:27.140264    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 36:db:fe:64:56:2b in /var/db/dhcpd_leases ...
	I0818 12:35:27.140318    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:35:27.140329    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:35:27.140338    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:35:27.140354    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:35:27.140363    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:35:27.140368    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:35:27.140376    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:35:27.140383    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:35:27.140389    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:35:27.140397    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:35:27.140411    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:35:27.140422    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:35:27.140430    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:35:27.140439    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:35:27.140456    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:35:27.140464    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:35:27.140475    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:35:27.140484    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:35:29.140861    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 16
	I0818 12:35:29.140873    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:29.140930    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:35:29.141710    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 36:db:fe:64:56:2b in /var/db/dhcpd_leases ...
	I0818 12:35:29.141767    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:35:29.141780    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:35:29.141790    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:35:29.141797    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:35:29.141809    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:35:29.141822    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:35:29.141829    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:35:29.141839    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:35:29.141847    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:35:29.141853    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:35:29.141863    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:35:29.141871    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:35:29.141878    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:35:29.141886    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:35:29.141894    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:35:29.141903    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:35:29.141918    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:35:29.141930    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:35:31.142726    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 17
	I0818 12:35:31.142742    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:31.142812    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:35:31.143572    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 36:db:fe:64:56:2b in /var/db/dhcpd_leases ...
	I0818 12:35:31.143624    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:35:31.143636    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:35:31.143647    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:35:31.143658    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:35:31.143671    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:35:31.143681    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:35:31.143689    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:35:31.143694    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:35:31.143704    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:35:31.143724    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:35:31.143736    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:35:31.143749    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:35:31.143764    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:35:31.143774    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:35:31.143787    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:35:31.143794    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:35:31.143803    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:35:31.143820    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:35:33.144854    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 18
	I0818 12:35:33.144870    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:33.144921    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:35:33.145722    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 36:db:fe:64:56:2b in /var/db/dhcpd_leases ...
	I0818 12:35:33.145772    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:35:33.145785    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:35:33.145794    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:35:33.145800    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:35:33.145806    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:35:33.145820    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:35:33.145836    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:35:33.145843    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:35:33.145850    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:35:33.145856    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:35:33.145864    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:35:33.145872    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:35:33.145882    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:35:33.145899    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:35:33.145907    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:35:33.145914    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:35:33.145922    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:35:33.145938    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:35:35.145994    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 19
	I0818 12:35:35.146007    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:35.146074    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:35:35.146934    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 36:db:fe:64:56:2b in /var/db/dhcpd_leases ...
	I0818 12:35:35.146943    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:35:35.146952    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:35:35.146958    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:35:35.146965    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:35:35.146970    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:35:35.146987    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:35:35.146999    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:35:35.147020    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:35:35.147034    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:35:35.147042    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:35:35.147050    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:35:35.147057    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:35:35.147065    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:35:35.147075    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:35:35.147082    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:35:35.147097    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:35:35.147112    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:35:35.147121    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:35:37.149048    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 20
	I0818 12:35:37.149062    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:37.149106    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:35:37.149881    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 36:db:fe:64:56:2b in /var/db/dhcpd_leases ...
	I0818 12:35:37.149922    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:35:37.149935    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:35:37.149949    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:35:37.149956    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:35:37.149964    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:35:37.149970    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:35:37.149978    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:35:37.149996    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:35:37.150012    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:35:37.150024    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:35:37.150043    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:35:37.150058    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:35:37.150066    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:35:37.150074    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:35:37.150092    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:35:37.150100    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:35:37.150108    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:35:37.150113    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:35:39.150599    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 21
	I0818 12:35:39.150614    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:39.150681    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:35:39.151492    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 36:db:fe:64:56:2b in /var/db/dhcpd_leases ...
	I0818 12:35:39.151547    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:35:39.151570    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:35:39.151602    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:35:39.151613    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:35:39.151622    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:35:39.151630    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:35:39.151637    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:35:39.151643    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:35:39.151659    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:35:39.151671    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:35:39.151680    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:35:39.151688    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:35:39.151698    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:35:39.151706    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:35:39.151712    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:35:39.151727    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:35:39.151735    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:35:39.151742    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:35:41.153713    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 22
	I0818 12:35:41.153727    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:41.153785    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:35:41.154565    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 36:db:fe:64:56:2b in /var/db/dhcpd_leases ...
	I0818 12:35:41.154639    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:35:41.154649    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:35:41.154656    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:35:41.154664    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:35:41.154671    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:35:41.154690    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:35:41.154699    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:35:41.154709    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:35:41.154717    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:35:41.154724    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:35:41.154731    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:35:41.154740    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:35:41.154748    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:35:41.154756    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:35:41.154763    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:35:41.154770    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:35:41.154777    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:35:41.154785    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:35:43.156758    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 23
	I0818 12:35:43.156774    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:43.156826    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:35:43.157591    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 36:db:fe:64:56:2b in /var/db/dhcpd_leases ...
	I0818 12:35:43.157654    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:35:43.157665    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:35:43.157673    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:35:43.157679    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:35:43.157687    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:35:43.157696    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:35:43.157705    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:35:43.157712    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:35:43.157720    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:35:43.157726    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:35:43.157732    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:35:43.157739    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:35:43.157745    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:35:43.157759    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:35:43.157772    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:35:43.157780    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:35:43.157789    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:35:43.157805    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:35:45.158067    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 24
	I0818 12:35:45.158078    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:45.158162    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:35:45.158947    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 36:db:fe:64:56:2b in /var/db/dhcpd_leases ...
	I0818 12:35:45.158974    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:35:45.158982    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:35:45.158993    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:35:45.159003    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:35:45.159009    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:35:45.159033    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:35:45.159051    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:35:45.159065    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:35:45.159074    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:35:45.159081    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:35:45.159097    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:35:45.159109    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:35:45.159127    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:35:45.159138    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:35:45.159146    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:35:45.159155    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:35:45.159161    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:35:45.159168    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:35:47.159716    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 25
	I0818 12:35:47.159738    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:47.159802    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:35:47.160642    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 36:db:fe:64:56:2b in /var/db/dhcpd_leases ...
	I0818 12:35:47.160681    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:35:47.160701    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:35:47.160718    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:35:47.160736    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:35:47.160749    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:35:47.160757    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:35:47.160773    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:35:47.160790    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:35:47.160803    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:35:47.160823    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:35:47.160836    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:35:47.160855    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:35:47.160869    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:35:47.160879    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:35:47.160887    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:35:47.160901    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:35:47.160908    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:35:47.160922    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:35:49.162190    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 26
	I0818 12:35:49.162202    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:49.162328    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:35:49.163302    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 36:db:fe:64:56:2b in /var/db/dhcpd_leases ...
	I0818 12:35:49.163346    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:35:49.163359    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:35:49.163369    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:35:49.163376    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:35:49.163384    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:35:49.163390    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:35:49.163413    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:35:49.163443    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:35:49.163457    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:35:49.163467    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:35:49.163473    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:35:49.163480    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:35:49.163493    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:35:49.163504    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:35:49.163520    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:35:49.163532    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:35:49.163550    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:35:49.163560    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:35:51.165407    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 27
	I0818 12:35:51.165421    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:51.165467    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:35:51.166253    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 36:db:fe:64:56:2b in /var/db/dhcpd_leases ...
	I0818 12:35:51.166283    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:35:51.166291    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:35:51.166299    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:35:51.166306    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:35:51.166323    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:35:51.166339    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:35:51.166357    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:35:51.166366    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:35:51.166396    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:35:51.166408    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:35:51.166417    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:35:51.166427    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:35:51.166434    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:35:51.166443    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:35:51.166463    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:35:51.166472    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:35:51.166483    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:35:51.166493    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:35:53.166572    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 28
	I0818 12:35:53.166586    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:53.166701    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:35:53.167512    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 36:db:fe:64:56:2b in /var/db/dhcpd_leases ...
	I0818 12:35:53.167534    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:35:53.167550    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:35:53.167559    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:35:53.167567    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:35:53.167584    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:35:53.167598    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:35:53.167607    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:35:53.167617    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:35:53.167627    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:35:53.167635    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:35:53.167643    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:35:53.167651    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:35:53.167658    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:35:53.167665    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:35:53.167672    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:35:53.167679    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:35:53.167686    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:35:53.167695    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:35:55.169681    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 29
	I0818 12:35:55.169697    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:55.169755    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:35:55.170657    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 36:db:fe:64:56:2b in /var/db/dhcpd_leases ...
	I0818 12:35:55.170680    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:35:55.170688    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:35:55.170706    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:35:55.170715    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:35:55.170722    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:35:55.170729    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:35:55.170736    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:35:55.170747    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:35:55.170755    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:35:55.170763    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:35:55.170771    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:35:55.170778    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:35:55.170786    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:35:55.170793    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:35:55.170801    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:35:55.170808    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:35:55.170816    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:35:55.170824    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:35:57.171369    5757 client.go:171] duration metric: took 1m0.845028756s to LocalClient.Create
	I0818 12:35:59.173001    5757 start.go:128] duration metric: took 1m2.878566215s to createHost
	I0818 12:35:59.173014    5757 start.go:83] releasing machines lock for "force-systemd-env-184000", held for 1m2.878690942s
	W0818 12:35:59.173051    5757 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 36:db:fe:64:56:2b
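
The failure recorded above is the hyperkit driver's lease-poll loop timing out: roughly every two seconds it re-reads /var/db/dhcpd_leases looking for an entry whose hardware address matches the VM's MAC (36:db:fe:64:56:2b here), and LocalClient.Create gives up after about a minute of attempts. A minimal Go sketch of that polling shape, under the assumption that substring-matching the lease file is enough for illustration (the real driver parses each lease into a struct, as the "dhcp entry: {...}" lines show; waitForIP is a hypothetical name, not the driver's):

	package main

	import (
		"fmt"
		"os"
		"strings"
		"time"
	)

	// waitForIP polls the macOS DHCP lease file until an entry for mac
	// appears, mirroring the "Attempt N ... Searching for <mac>" lines above.
	func waitForIP(mac string, attempts int, interval time.Duration) error {
		for i := 0; i < attempts; i++ {
			data, err := os.ReadFile("/var/db/dhcpd_leases")
			if err == nil && strings.Contains(strings.ToLower(string(data)), strings.ToLower(mac)) {
				return nil // a lease for this MAC exists; its IP can be read from the entry
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("could not find an IP address for %s", mac)
	}

	func main() {
		if err := waitForIP("36:db:fe:64:56:2b", 30, 2*time.Second); err != nil {
			fmt.Println(err)
		}
	}
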
	I0818 12:35:59.173372    5757 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:35:59.173402    5757 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:35:59.182062    5757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53637
	I0818 12:35:59.182414    5757 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:35:59.182729    5757 main.go:141] libmachine: Using API Version  1
	I0818 12:35:59.182743    5757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:35:59.182940    5757 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:35:59.183288    5757 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:35:59.183309    5757 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:35:59.191697    5757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53639
	I0818 12:35:59.192046    5757 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:35:59.192410    5757 main.go:141] libmachine: Using API Version  1
	I0818 12:35:59.192432    5757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:35:59.192663    5757 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:35:59.192820    5757 main.go:141] libmachine: (force-systemd-env-184000) Calling .GetState
	I0818 12:35:59.192914    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:59.192986    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:35:59.193922    5757 main.go:141] libmachine: (force-systemd-env-184000) Calling .DriverName
	I0818 12:35:59.236210    5757 out.go:177] * Deleting "force-systemd-env-184000" in hyperkit ...
	I0818 12:35:59.257242    5757 main.go:141] libmachine: (force-systemd-env-184000) Calling .Remove
	I0818 12:35:59.257357    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:59.257366    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:59.257430    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:35:59.258365    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:35:59.258407    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | waiting for graceful shutdown
	I0818 12:36:00.259772    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:00.259927    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:36:00.260862    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | waiting for graceful shutdown
	I0818 12:36:01.262548    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:01.262654    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:36:01.264191    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | waiting for graceful shutdown
	I0818 12:36:02.264417    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:02.264523    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:36:02.265157    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | waiting for graceful shutdown
	I0818 12:36:03.266848    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:03.266925    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:36:03.267508    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | waiting for graceful shutdown
	I0818 12:36:04.269273    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:36:04.269363    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5782
	I0818 12:36:04.270305    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | sending sigkill
	I0818 12:36:04.270314    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
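
The Remove sequence above first waits for a graceful shutdown, re-checking the hyperkit pid once a second, then escalates to SIGKILL after several failed checks. A hedged sketch of that escalation; pidAlive and stopVM are hypothetical helpers, and the real driver reads the pid back from hyperkit's JSON state file, as the "hyperkit pid from json" lines show:

	package main

	import (
		"fmt"
		"syscall"
		"time"
	)

	// pidAlive probes a process with the conventional signal 0, which
	// delivers nothing but fails if the pid no longer exists.
	func pidAlive(pid int) bool {
		return syscall.Kill(pid, 0) == nil
	}

	// stopVM waits up to grace for the VM process to exit on its own,
	// then sends SIGKILL, matching the "waiting for graceful shutdown"
	// and "sending sigkill" lines above.
	func stopVM(pid int, grace time.Duration) {
		deadline := time.Now().Add(grace)
		for time.Now().Before(deadline) {
			if !pidAlive(pid) {
				return
			}
			fmt.Println("waiting for graceful shutdown")
			time.Sleep(time.Second)
		}
		fmt.Println("sending sigkill")
		_ = syscall.Kill(pid, syscall.SIGKILL)
	}

	func main() {
		stopVM(5782, 5*time.Second) // pid taken from the log above
	}
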
	W0818 12:36:04.281961    5757 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 36:db:fe:64:56:2b
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 36:db:fe:64:56:2b
	I0818 12:36:04.281975    5757 start.go:729] Will try again in 5 seconds ...
	I0818 12:36:04.292112    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:36:04 WARN : hyperkit: failed to read stderr: EOF
	I0818 12:36:04.292129    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:36:04 WARN : hyperkit: failed to read stdout: EOF
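
After tearing the VM down, minikube retries the whole host creation once more after a short delay, which is what "Will try again in 5 seconds" records (the two WARN : hyperkit lines are just the log pipes closing as the killed process exits). A sketch of that outer retry shape, with createHost and deleteHost as hypothetical stand-ins for the real start.go/libmachine calls:

	package main

	import (
		"fmt"
		"time"
	)

	// Stand-ins for the driver operations; hypothetical, not minikube's API.
	func createHost() error { return fmt.Errorf("IP address never found in dhcp leases file") }
	func deleteHost()       { fmt.Println("deleting half-created VM in hyperkit") }

	// startWithRetry mirrors the two-attempt pattern in this log: create,
	// and on failure delete the wreckage, wait five seconds, try once more.
	func startWithRetry() error {
		err := createHost()
		if err == nil {
			return nil
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		deleteHost()
		time.Sleep(5 * time.Second)
		return createHost()
	}

	func main() {
		if err := startWithRetry(); err != nil {
			fmt.Println("start failed:", err)
		}
	}
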
	I0818 12:36:09.282344    5757 start.go:360] acquireMachinesLock for force-systemd-env-184000: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:37:01.952557    5757 start.go:364] duration metric: took 52.671447348s to acquireMachinesLock for "force-systemd-env-184000"
	I0818 12:37:01.952598    5757 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-184000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:37:01.952650    5757 start.go:125] createHost starting for "" (driver="hyperkit")
	I0818 12:37:01.973848    5757 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0818 12:37:01.973921    5757 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:37:01.973953    5757 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:37:01.982791    5757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53643
	I0818 12:37:01.983208    5757 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:37:01.983561    5757 main.go:141] libmachine: Using API Version  1
	I0818 12:37:01.983582    5757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:37:01.983794    5757 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:37:01.983920    5757 main.go:141] libmachine: (force-systemd-env-184000) Calling .GetMachineName
	I0818 12:37:01.984018    5757 main.go:141] libmachine: (force-systemd-env-184000) Calling .DriverName
	I0818 12:37:01.984129    5757 start.go:159] libmachine.API.Create for "force-systemd-env-184000" (driver="hyperkit")
	I0818 12:37:01.984149    5757 client.go:168] LocalClient.Create starting
	I0818 12:37:01.984176    5757 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem
	I0818 12:37:01.984227    5757 main.go:141] libmachine: Decoding PEM data...
	I0818 12:37:01.984241    5757 main.go:141] libmachine: Parsing certificate...
	I0818 12:37:01.984280    5757 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem
	I0818 12:37:01.984318    5757 main.go:141] libmachine: Decoding PEM data...
	I0818 12:37:01.984326    5757 main.go:141] libmachine: Parsing certificate...
	I0818 12:37:01.984338    5757 main.go:141] libmachine: Running pre-create checks...
	I0818 12:37:01.984344    5757 main.go:141] libmachine: (force-systemd-env-184000) Calling .PreCreateCheck
	I0818 12:37:01.984424    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:01.984457    5757 main.go:141] libmachine: (force-systemd-env-184000) Calling .GetConfigRaw
	I0818 12:37:01.994927    5757 main.go:141] libmachine: Creating machine...
	I0818 12:37:01.994937    5757 main.go:141] libmachine: (force-systemd-env-184000) Calling .Create
	I0818 12:37:01.995034    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:01.995186    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | I0818 12:37:01.995041    5847 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19423-1007/.minikube
	I0818 12:37:01.995248    5757 main.go:141] libmachine: (force-systemd-env-184000) Downloading /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19423-1007/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0818 12:37:02.322839    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | I0818 12:37:02.322769    5847 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/id_rsa...
	I0818 12:37:02.464026    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | I0818 12:37:02.463936    5847 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/force-systemd-env-184000.rawdisk...
	I0818 12:37:02.464044    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Writing magic tar header
	I0818 12:37:02.464057    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Writing SSH key tar header
	I0818 12:37:02.464418    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | I0818 12:37:02.464377    5847 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000 ...
	I0818 12:37:02.839794    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:02.839818    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/hyperkit.pid
	I0818 12:37:02.839829    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Using UUID 12e1d286-502f-428e-b6e8-a1542dfbb167
	I0818 12:37:02.865533    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Generated MAC 7e:da:96:2e:b5:d1
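
The second attempt provisions from scratch, so a fresh UUID and MAC appear ("Generated MAC 7e:da:96:2e:b5:d1"). For illustration only, a random locally administered unicast MAC can be built by randomizing six bytes and fixing the two flag bits of the first octet; this is an assumption about the general technique rather than the driver's exact method, since with hyperkit the address is ultimately tied to the VM UUID handed to vmnet:

	package main

	import (
		"crypto/rand"
		"fmt"
	)

	// randomLocalMAC returns a random locally administered, unicast MAC.
	// Illustrative sketch only, not the hyperkit driver's code path.
	func randomLocalMAC() (string, error) {
		b := make([]byte, 6)
		if _, err := rand.Read(b); err != nil {
			return "", err
		}
		b[0] = (b[0] | 0x02) & 0xFE // set locally-administered bit, clear multicast bit
		return fmt.Sprintf("%02x:%02x:%02x:%02x:%02x:%02x", b[0], b[1], b[2], b[3], b[4], b[5]), nil
	}

	func main() {
		mac, err := randomLocalMAC()
		if err != nil {
			panic(err)
		}
		fmt.Println("generated MAC:", mac)
	}
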
	I0818 12:37:02.865549    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-184000
	I0818 12:37:02.865591    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:37:02 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"12e1d286-502f-428e-b6e8-a1542dfbb167", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:37:02.865643    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:37:02 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"12e1d286-502f-428e-b6e8-a1542dfbb167", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:37:02.865693    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:37:02 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "12e1d286-502f-428e-b6e8-a1542dfbb167", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/force-systemd-env-184000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-184000"}
	I0818 12:37:02.865740    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:37:02 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 12e1d286-502f-428e-b6e8-a1542dfbb167 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/force-systemd-env-184000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-184000"
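
The Arguments and CmdLine lines above record the complete hyperkit invocation the driver builds: a pid file, 2 vCPUs, 2048M of RAM, a virtio-net NIC keyed to the -U UUID, the raw disk and boot2docker ISO, a virtio RNG, a serial console on an autopty, and a kexec boot of the bzimage/initrd pair. A trimmed sketch of spawning hyperkit with the same flag shape via os/exec; the paths are abbreviated placeholders, and the driver's pid-file bookkeeping and stdout/stderr-to-logger plumbing (next line below) are omitted:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Flag shape copied from the CmdLine logged above; paths abbreviated.
		args := []string{
			"-A", "-u",
			"-F", "hyperkit.pid", // pid file
			"-c", "2", // vCPUs
			"-m", "2048M", // memory
			"-s", "0:0,hostbridge",
			"-s", "31,lpc",
			"-s", "1:0,virtio-net", // vmnet NIC; its MAC follows the -U UUID
			"-U", "12e1d286-502f-428e-b6e8-a1542dfbb167",
			"-s", "2:0,virtio-blk,force-systemd-env-184000.rawdisk",
			"-s", "3,ahci-cd,boot2docker.iso",
			"-s", "4,virtio-rnd",
			"-l", "com1,autopty=tty,log=console-ring",
			"-f", "kexec,bzimage,initrd,earlyprintk=serial loglevel=3 console=ttyS0",
		}
		cmd := exec.Command("/usr/local/bin/hyperkit", args...)
		if err := cmd.Start(); err != nil { // Start returns as soon as the process is spawned
			fmt.Println("hyperkit failed to start:", err)
			return
		}
		fmt.Println("hyperkit pid:", cmd.Process.Pid)
	}
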
	I0818 12:37:02.865757    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:37:02 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 12:37:02.868696    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:37:02 DEBUG: hyperkit: Pid is 5858
	I0818 12:37:02.869231    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 0
	I0818 12:37:02.869246    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:02.869307    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5858
	I0818 12:37:02.870230    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 7e:da:96:2e:b5:d1 in /var/db/dhcpd_leases ...
	I0818 12:37:02.870291    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:37:02.870305    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:37:02.870403    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:37:02.870438    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:37:02.870452    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:37:02.870464    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:37:02.870478    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:37:02.870490    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:37:02.870501    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:37:02.870524    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:37:02.870535    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:37:02.870543    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:37:02.870553    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:37:02.870559    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:37:02.870575    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:37:02.870589    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:37:02.870617    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:37:02.870640    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:37:02.876209    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:37:02 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 12:37:02.884423    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:37:02 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/force-systemd-env-184000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 12:37:02.885325    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:37:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:37:02.885346    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:37:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:37:02.885360    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:37:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:37:02.885374    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:37:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:37:03.261158    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:37:03 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 12:37:03.261175    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:37:03 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 12:37:03.375781    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:37:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:37:03.375800    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:37:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:37:03.375813    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:37:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:37:03.375840    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:37:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:37:03.376690    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:37:03 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 12:37:03.376702    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:37:03 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 12:37:04.870787    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 1
	I0818 12:37:04.870804    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:04.870875    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5858
	I0818 12:37:04.871668    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 7e:da:96:2e:b5:d1 in /var/db/dhcpd_leases ...
	I0818 12:37:04.871722    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:37:04.871736    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:37:04.871746    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:37:04.871755    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:37:04.871764    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:37:04.871775    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:37:04.871784    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:37:04.871793    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:37:04.871799    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:37:04.871807    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:37:04.871814    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:37:04.871823    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:37:04.871831    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:37:04.871849    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:37:04.871865    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:37:04.871880    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:37:04.871888    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:37:04.871897    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:37:06.872237    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 2
	I0818 12:37:06.872253    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:06.872330    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5858
	I0818 12:37:06.873203    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 7e:da:96:2e:b5:d1 in /var/db/dhcpd_leases ...
	I0818 12:37:06.873253    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:37:06.873263    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:37:06.873274    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:37:06.873281    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:37:06.873289    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:37:06.873298    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:37:06.873304    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:37:06.873310    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:37:06.873317    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:37:06.873326    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:37:06.873335    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:37:06.873344    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:37:06.873351    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:37:06.873359    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:37:06.873379    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:37:06.873391    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:37:06.873401    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:37:06.873415    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:37:08.752598    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:37:08 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0818 12:37:08.752751    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:37:08 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0818 12:37:08.752761    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:37:08 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0818 12:37:08.772407    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | 2024/08/18 12:37:08 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0818 12:37:08.875512    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 3
	I0818 12:37:08.875554    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:08.875723    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5858
	I0818 12:37:08.877195    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 7e:da:96:2e:b5:d1 in /var/db/dhcpd_leases ...
	I0818 12:37:08.877309    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:37:08.877329    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:37:08.877344    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:37:08.877356    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:37:08.877371    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:37:08.877383    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:37:08.877395    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:37:08.877411    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:37:08.877426    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:37:08.877440    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:37:08.877453    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:37:08.877468    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:37:08.877497    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:37:08.877530    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:37:08.877568    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:37:08.877585    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:37:08.877608    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:37:08.877616    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
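Each attempt above re-reads /var/db/dhcpd_leases and compares every lease's hardware address against the VM's MAC (7e:da:96:2e:b5:d1); "Found 17 entries" means the file parsed cleanly but contains no lease for that MAC yet. Below is a minimal, self-contained Go sketch of that lookup. The brace-delimited key=value lease format it parses is an assumption based on how macOS bootpd typically writes the file, not something shown in this report, and parseLeases/DHCPEntry are illustrative names, not the driver's actual code.

// lease_lookup.go — sketch of the per-attempt lease-file scan.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// DHCPEntry mirrors the fields printed in the log above.
type DHCPEntry struct {
	Name, IPAddress, HWAddress, ID, Lease string
}

// parseLeases reads brace-delimited key=value blocks into DHCPEntry values.
// The exact on-disk format is an assumption here.
func parseLeases(path string) ([]DHCPEntry, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var entries []DHCPEntry
	var cur DHCPEntry
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{":
			cur = DHCPEntry{} // start of a new lease block
		case line == "}":
			entries = append(entries, cur) // block complete
		default:
			k, v, ok := strings.Cut(line, "=")
			if !ok {
				continue
			}
			switch k {
			case "name":
				cur.Name = v
			case "ip_address":
				cur.IPAddress = v
			case "hw_address":
				// bootpd is assumed to store "1,aa:bb:..."; keep the MAC part.
				cur.HWAddress = strings.TrimPrefix(v, "1,")
			case "identifier":
				cur.ID = v
			case "lease":
				cur.Lease = v
			}
		}
	}
	return entries, sc.Err()
}

func main() {
	target := "7e:da:96:2e:b5:d1" // the MAC being searched for in the log
	entries, err := parseLeases("/var/db/dhcpd_leases")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("Found %d entries\n", len(entries))
	for _, e := range entries {
		if e.HWAddress == target {
			fmt.Printf("match: %s -> %s\n", target, e.IPAddress)
			return
		}
	}
	fmt.Println("no lease yet for", target) // the state every attempt above is in
}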
	I0818 12:37:10.877544    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 4
	I0818 12:37:10.877562    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:10.877645    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5858
	I0818 12:37:10.878443    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 7e:da:96:2e:b5:d1 in /var/db/dhcpd_leases ...
	I0818 12:37:10.878501    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:37:10.878510    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:37:10.878526    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:37:10.878539    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:37:10.878554    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:37:10.878580    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:37:10.878605    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:37:10.878617    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:37:10.878625    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:37:10.878634    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:37:10.878646    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:37:10.878655    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:37:10.878662    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:37:10.878668    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:37:10.878679    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:37:10.878690    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:37:10.878697    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:37:10.878704    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:37:12.878893    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 5
	I0818 12:37:12.878908    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:12.878954    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5858
	I0818 12:37:12.879774    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 7e:da:96:2e:b5:d1 in /var/db/dhcpd_leases ...
	I0818 12:37:12.879832    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:37:12.879841    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:37:12.879851    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:37:12.879874    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:37:12.879884    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:37:12.879890    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:37:12.879898    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:37:12.879906    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:37:12.879913    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:37:12.879920    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:37:12.879939    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:37:12.879952    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:37:12.879964    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:37:12.879973    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:37:12.879981    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:37:12.879989    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:37:12.879997    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:37:12.880004    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:37:14.880624    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 6
	I0818 12:37:14.880641    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:14.880737    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5858
	I0818 12:37:14.881542    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 7e:da:96:2e:b5:d1 in /var/db/dhcpd_leases ...
	I0818 12:37:14.881589    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:37:14.881602    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:37:14.881612    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:37:14.881629    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:37:14.881638    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:37:14.881645    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:37:14.881668    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:37:14.881678    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:37:14.881687    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:37:14.881704    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:37:14.881719    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:37:14.881730    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:37:14.881738    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:37:14.881746    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:37:14.881752    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:37:14.881759    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:37:14.881767    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:37:14.881776    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:37:16.882605    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 7
	I0818 12:37:16.882624    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:16.882695    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5858
	I0818 12:37:16.883431    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 7e:da:96:2e:b5:d1 in /var/db/dhcpd_leases ...
	I0818 12:37:16.883467    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:37:16.883477    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:37:16.883485    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:37:16.883494    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:37:16.883513    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:37:16.883527    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:37:16.883533    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:37:16.883543    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:37:16.883553    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:37:16.883570    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:37:16.883584    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:37:16.883592    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:37:16.883601    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:37:16.883617    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:37:16.883630    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:37:16.883638    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:37:16.883645    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:37:16.883654    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:37:18.885583    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 8
	I0818 12:37:18.885600    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:18.885657    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5858
	I0818 12:37:18.886431    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 7e:da:96:2e:b5:d1 in /var/db/dhcpd_leases ...
	I0818 12:37:18.886483    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:37:18.886493    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:37:18.886502    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:37:18.886510    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:37:18.886539    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:37:18.886552    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:37:18.886562    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:37:18.886575    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:37:18.886583    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:37:18.886591    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:37:18.886605    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:37:18.886619    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:37:18.886627    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:37:18.886636    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:37:18.886643    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:37:18.886651    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:37:18.886658    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:37:18.886664    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
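The timestamps show the driver polling on a roughly two-second cadence (Attempt 2 at 12:37:06, Attempt 3 at 12:37:08, Attempt 4 at 12:37:10, and so on). A sketch of that retry loop follows; lookupIP is a hypothetical stand-in for the lease scan sketched earlier, and the attempt cap is illustrative — the real driver's limit and overall timeout are not visible in this excerpt.

// retry_sketch.go — the ~2s polling cadence visible in the timestamps above.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoLease = errors.New("no lease found")

// lookupIP stands in for the dhcpd_leases scan from the previous sketch.
func lookupIP(mac string) (string, error) {
	return "", errNoLease // placeholder: no lease has appeared yet
}

// waitForIP retries the lookup, sleeping between attempts like the log shows.
func waitForIP(mac string, attempts int, delay time.Duration) (string, error) {
	for i := 1; i <= attempts; i++ {
		fmt.Printf("Attempt %d\n", i)
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		time.Sleep(delay) // matches the ~2s gap between attempts in the log
	}
	return "", fmt.Errorf("%s: %w after %d attempts", mac, errNoLease, attempts)
}

func main() {
	// Three attempts keep the demo short; the log suggests a much larger cap.
	ip, err := waitForIP("7e:da:96:2e:b5:d1", 3, 2*time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("got IP:", ip)
}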
	I0818 12:37:20.887835    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 9
	I0818 12:37:20.887852    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:20.887903    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5858
	I0818 12:37:20.888683    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 7e:da:96:2e:b5:d1 in /var/db/dhcpd_leases ...
	I0818 12:37:20.888712    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:37:20.888719    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:37:20.888739    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:37:20.888750    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:37:20.888762    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:37:20.888775    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:37:20.888787    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:37:20.888795    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:37:20.888806    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:37:20.888814    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:37:20.888826    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:37:20.888835    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:37:20.888845    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:37:20.888855    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:37:20.888870    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:37:20.888880    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:37:20.888887    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:37:20.888896    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:37:22.889614    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 10
	I0818 12:37:22.889628    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:22.889691    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5858
	I0818 12:37:22.890449    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 7e:da:96:2e:b5:d1 in /var/db/dhcpd_leases ...
	I0818 12:37:22.890510    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:37:22.890523    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:37:22.890533    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:37:22.890539    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:37:22.890553    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:37:22.890565    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:37:22.890574    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:37:22.890580    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:37:22.890596    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:37:22.890604    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:37:22.890613    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:37:22.890621    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:37:22.890629    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:37:22.890637    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:37:22.890644    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:37:22.890651    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:37:22.890656    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:37:22.890669    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:37:24.892089    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 11
	I0818 12:37:24.892105    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:24.892170    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5858
	I0818 12:37:24.893013    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 7e:da:96:2e:b5:d1 in /var/db/dhcpd_leases ...
	I0818 12:37:24.893057    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:37:24.893066    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:37:24.893094    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:37:24.893113    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:37:24.893128    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:37:24.893142    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:37:24.893148    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:37:24.893160    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:37:24.893168    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:37:24.893176    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:37:24.893182    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:37:24.893189    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:37:24.893197    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:37:24.893209    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:37:24.893220    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:37:24.893231    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:37:24.893240    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:37:24.893248    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:37:26.895207    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 12
	I0818 12:37:26.895223    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:26.895285    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5858
	I0818 12:37:26.896078    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 7e:da:96:2e:b5:d1 in /var/db/dhcpd_leases ...
	I0818 12:37:26.896126    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:37:26.896138    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:37:26.896158    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:37:26.896165    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:37:26.896174    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:37:26.896183    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:37:26.896190    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:37:26.896198    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:37:26.896206    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:37:26.896217    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:37:26.896224    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:37:26.896232    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:37:26.896245    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:37:26.896255    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:37:26.896264    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:37:26.896273    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:37:26.896280    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:37:26.896299    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:37:28.898253    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 13
	I0818 12:37:28.898270    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:28.898341    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5858
	I0818 12:37:28.899121    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 7e:da:96:2e:b5:d1 in /var/db/dhcpd_leases ...
	I0818 12:37:28.899156    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:37:28.899164    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:37:28.899172    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:37:28.899196    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:37:28.899207    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:37:28.899221    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:37:28.899231    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:37:28.899238    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:37:28.899246    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:37:28.899253    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:37:28.899261    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:37:28.899269    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:37:28.899277    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:37:28.899284    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:37:28.899291    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:37:28.899305    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:37:28.899313    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:37:28.899333    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
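Before every lease scan the driver also logs "hyperkit pid from json: 5858", i.e. it re-reads the stored hyperkit pid from the machine's JSON config, presumably so it can notice if the VM process dies while it waits for DHCP. A sketch of that check is below; the hyperkit.json filename, the Pid field name, and the signal-0 liveness probe are all assumptions for illustration rather than the driver's confirmed behavior.

// pid_check.go — re-read the saved hyperkit pid and probe whether it is alive.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"syscall"
)

// machineConfig holds just the field this sketch needs; the real config
// file surely carries more, and this field name is an assumption.
type machineConfig struct {
	Pid int `json:"Pid"`
}

// hyperkitPid loads the pid the driver recorded when it launched hyperkit.
func hyperkitPid(configPath string) (int, error) {
	data, err := os.ReadFile(configPath)
	if err != nil {
		return 0, err
	}
	var cfg machineConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		return 0, err
	}
	return cfg.Pid, nil
}

// alive sends signal 0, which checks process existence without killing it.
func alive(pid int) bool {
	return syscall.Kill(pid, 0) == nil
}

func main() {
	pid, err := hyperkitPid("hyperkit.json") // hypothetical path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("hyperkit pid from json: %d, alive: %v\n", pid, alive(pid))
}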
	I0818 12:37:30.900085    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 14
	I0818 12:37:30.900102    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:30.900205    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5858
	I0818 12:37:30.900986    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 7e:da:96:2e:b5:d1 in /var/db/dhcpd_leases ...
	I0818 12:37:30.901039    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:37:30.901052    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:37:30.901061    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:37:30.901068    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:37:30.901076    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:37:30.901084    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:37:30.901091    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:37:30.901105    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:37:30.901112    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:37:30.901136    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:37:30.901149    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:37:30.901158    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:37:30.901165    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:37:30.901174    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:37:30.901190    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:37:30.901202    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:37:30.901212    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:37:30.901220    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:37:32.901740    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 15
	I0818 12:37:32.901754    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:32.901805    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5858
	I0818 12:37:32.902561    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 7e:da:96:2e:b5:d1 in /var/db/dhcpd_leases ...
	I0818 12:37:32.902597    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:37:32.902610    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:37:32.902633    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:37:32.902641    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:37:32.902649    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:37:32.902659    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:37:32.902676    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:37:32.902689    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:37:32.902697    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:37:32.902705    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:37:32.902719    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:37:32.902730    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:37:32.902739    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:37:32.902747    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:37:32.902754    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:37:32.902762    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:37:32.902777    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:37:32.902787    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:37:34.904746    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 16
	I0818 12:37:34.904763    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:34.904798    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5858
	I0818 12:37:34.905542    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 7e:da:96:2e:b5:d1 in /var/db/dhcpd_leases ...
	I0818 12:37:34.905596    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:37:34.905611    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:37:34.905628    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:37:34.905640    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:37:34.905650    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:37:34.905660    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:37:34.905675    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:37:34.905683    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:37:34.905691    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:37:34.905700    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:37:34.905708    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:37:34.905715    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:37:34.905723    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:37:34.905730    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:37:34.905737    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:37:34.905745    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:37:34.905753    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:37:34.905762    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:37:36.906118    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 17
	I0818 12:37:36.906131    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:36.906175    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5858
	I0818 12:37:36.906957    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 7e:da:96:2e:b5:d1 in /var/db/dhcpd_leases ...
	I0818 12:37:36.907013    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:37:36.907023    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:37:36.907034    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:37:36.907040    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:37:36.907066    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:37:36.907082    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:37:36.907093    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:37:36.907101    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:37:36.907109    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:37:36.907115    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:37:36.907129    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:37:36.907142    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:37:36.907159    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:37:36.907171    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:37:36.907187    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:37:36.907199    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:37:36.907216    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:37:36.907230    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:37:38.908165    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 18
	I0818 12:37:38.908182    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:38.908262    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5858
	I0818 12:37:38.909026    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 7e:da:96:2e:b5:d1 in /var/db/dhcpd_leases ...
	I0818 12:37:38.909077    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:37:38.909086    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:37:38.909094    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:37:38.909100    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:37:38.909125    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:37:38.909157    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:37:38.909173    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:37:38.909185    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:37:38.909192    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:37:38.909200    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:37:38.909208    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:37:38.909216    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:37:38.909223    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:37:38.909231    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:37:38.909238    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:37:38.909244    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:37:38.909260    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:37:38.909269    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:37:40.911221    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 19
	I0818 12:37:40.911237    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:40.911316    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5858
	I0818 12:37:40.912347    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 7e:da:96:2e:b5:d1 in /var/db/dhcpd_leases ...
	I0818 12:37:40.912405    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:37:40.912416    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:37:40.912423    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:37:40.912432    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:37:40.912440    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:37:40.912456    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:37:40.912469    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:37:40.912480    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:37:40.912493    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:37:40.912502    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:37:40.912517    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:37:40.912529    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:37:40.912537    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:37:40.912546    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:37:40.912552    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:37:40.912561    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:37:40.912574    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:37:40.912587    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:37:42.914543    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 20
	I0818 12:37:42.914557    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:42.914605    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5858
	I0818 12:37:42.915416    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 7e:da:96:2e:b5:d1 in /var/db/dhcpd_leases ...
	I0818 12:37:42.915453    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:37:42.915463    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:37:42.915481    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:37:42.915492    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:37:42.915500    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:37:42.915507    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:37:42.915514    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:37:42.915521    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:37:42.915528    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:37:42.915537    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:37:42.915555    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:37:42.915563    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:37:42.915571    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:37:42.915579    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:37:42.915588    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:37:42.915596    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:37:42.915603    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:37:42.915611    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:37:44.916419    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 21
	I0818 12:37:44.916432    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:44.916497    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5858
	I0818 12:37:44.917317    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 7e:da:96:2e:b5:d1 in /var/db/dhcpd_leases ...
	I0818 12:37:44.917364    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:37:44.917375    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:37:44.917383    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:37:44.917390    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:37:44.917399    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:37:44.917409    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:37:44.917425    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:37:44.917437    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:37:44.917445    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:37:44.917467    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:37:44.917487    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:37:44.917499    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:37:44.917508    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:37:44.917516    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:37:44.917527    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:37:44.917537    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:37:44.917546    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:37:44.917553    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:37:46.918645    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 22
	I0818 12:37:46.918663    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:46.918704    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5858
	I0818 12:37:46.919487    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 7e:da:96:2e:b5:d1 in /var/db/dhcpd_leases ...
	I0818 12:37:46.919528    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:37:46.919537    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:37:46.919545    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:37:46.919553    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:37:46.919567    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:37:46.919577    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:37:46.919587    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:37:46.919595    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:37:46.919614    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:37:46.919622    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:37:46.919630    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:37:46.919638    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:37:46.919646    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:37:46.919654    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:37:46.919672    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:37:46.919684    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:37:46.919698    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:37:46.919709    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:37:48.921052    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 23
	I0818 12:37:48.921072    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:48.921161    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5858
	I0818 12:37:48.922010    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 7e:da:96:2e:b5:d1 in /var/db/dhcpd_leases ...
	I0818 12:37:48.922052    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:37:48.922062    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:37:48.922077    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:37:48.922085    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:37:48.922093    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:37:48.922099    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:37:48.922106    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:37:48.922115    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:37:48.922131    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:37:48.922145    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:37:48.922155    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:37:48.922166    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:37:48.922182    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:37:48.922203    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:37:48.922218    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:37:48.922231    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:37:48.922244    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:37:48.922253    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:37:50.922900    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 24
	I0818 12:37:50.922916    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:50.922965    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5858
	I0818 12:37:50.923774    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 7e:da:96:2e:b5:d1 in /var/db/dhcpd_leases ...
	I0818 12:37:50.923813    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:37:50.923825    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:37:50.923846    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:37:50.923858    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:37:50.923866    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:37:50.923875    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:37:50.923887    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:37:50.923896    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:37:50.923903    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:37:50.923909    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:37:50.923924    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:37:50.923950    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:37:50.923958    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:37:50.923971    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:37:50.923980    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:37:50.923988    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:37:50.923996    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:37:50.924004    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:37:52.924273    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 25
	I0818 12:37:52.924288    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:52.924346    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5858
	I0818 12:37:52.925099    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 7e:da:96:2e:b5:d1 in /var/db/dhcpd_leases ...
	I0818 12:37:52.925153    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:37:52.925166    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:37:52.925182    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:37:52.925195    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:37:52.925213    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:37:52.925223    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:37:52.925230    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:37:52.925243    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:37:52.925251    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:37:52.925260    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:37:52.925284    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:37:52.925298    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:37:52.925308    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:37:52.925318    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:37:52.925327    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:37:52.925335    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:37:52.925342    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:37:52.925350    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:37:54.926023    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 26
	I0818 12:37:54.926038    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:54.926115    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5858
	I0818 12:37:54.926883    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 7e:da:96:2e:b5:d1 in /var/db/dhcpd_leases ...
	I0818 12:37:54.926935    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:37:54.926945    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:37:54.926957    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:37:54.926968    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:37:54.926977    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:37:54.926985    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:37:54.926999    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:37:54.927008    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:37:54.927017    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:37:54.927026    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:37:54.927036    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:37:54.927046    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:37:54.927070    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:37:54.927079    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:37:54.927088    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:37:54.927094    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:37:54.927109    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:37:54.927121    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:37:56.928876    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 27
	I0818 12:37:56.928889    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:56.928995    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5858
	I0818 12:37:56.929739    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 7e:da:96:2e:b5:d1 in /var/db/dhcpd_leases ...
	I0818 12:37:56.929784    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:37:56.929793    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:37:56.929802    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:37:56.929808    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:37:56.929817    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:37:56.929824    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:37:56.929832    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:37:56.929839    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:37:56.929845    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:37:56.929851    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:37:56.929866    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:37:56.929880    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:37:56.929892    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:37:56.929901    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:37:56.929907    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:37:56.929914    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:37:56.929921    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:37:56.929947    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:37:58.930527    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 28
	I0818 12:37:58.930541    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:37:58.930630    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5858
	I0818 12:37:58.931373    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 7e:da:96:2e:b5:d1 in /var/db/dhcpd_leases ...
	I0818 12:37:58.931425    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:37:58.931434    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:37:58.931444    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:37:58.931453    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:37:58.931463    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:37:58.931473    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:37:58.931480    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:37:58.931487    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:37:58.931495    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:37:58.931503    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:37:58.931510    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:37:58.931530    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:37:58.931539    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:37:58.931545    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:37:58.931552    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:37:58.931558    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:37:58.931567    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:37:58.931588    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:38:00.932989    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Attempt 29
	I0818 12:38:00.933006    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:38:00.933080    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | hyperkit pid from json: 5858
	I0818 12:38:00.934020    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Searching for 7e:da:96:2e:b5:d1 in /var/db/dhcpd_leases ...
	I0818 12:38:00.934091    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0818 12:38:00.934101    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:2c:db:9e:9d:78 ID:1,3a:2c:db:9e:9d:78 Lease:0x66c39dbb}
	I0818 12:38:00.934110    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:42:d7:7:52:f5:2e ID:1,42:d7:7:52:f5:2e Lease:0x66c39cfb}
	I0818 12:38:00.934120    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:f2:76:5c:94:4a ID:1,a2:f2:76:5c:94:4a Lease:0x66c39c5d}
	I0818 12:38:00.934129    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:2a:73:21:e3:ad:d5 ID:1,2a:73:21:e3:ad:d5 Lease:0x66c24a3c}
	I0818 12:38:00.934138    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:32:ce:3:a3:d:5f ID:1,32:ce:3:a3:d:5f Lease:0x66c39c1a}
	I0818 12:38:00.934151    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:2e:a6:97:d0:a2:e4 ID:1,2e:a6:97:d0:a2:e4 Lease:0x66c39bd8}
	I0818 12:38:00.934160    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ce:aa:f0:28:ca:b ID:1,ce:aa:f0:28:ca:b Lease:0x66c2480e}
	I0818 12:38:00.934167    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:ce:fa:df:87:2f:9d ID:1,ce:fa:df:87:2f:9d Lease:0x66c39946}
	I0818 12:38:00.934175    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:9a:35:19:43:86:bc ID:1,9a:35:19:43:86:bc Lease:0x66c3990a}
	I0818 12:38:00.934182    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:e2:63:8e:15:ce:a2 ID:1,e2:63:8e:15:ce:a2 Lease:0x66c398dd}
	I0818 12:38:00.934188    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:38:00.934202    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:38:00.934217    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39868}
	I0818 12:38:00.934226    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:38:00.934234    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:46:bc:e4:8e:57:d1 ID:1,46:bc:e4:8e:57:d1 Lease:0x66c394e5}
	I0818 12:38:00.934241    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:f6:f8:2a:37:9:b3 ID:1,f6:f8:2a:37:9:b3 Lease:0x66c39348}
	I0818 12:38:00.934248    5757 main.go:141] libmachine: (force-systemd-env-184000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:42:f:73:12:11:a3 ID:1,42:f:73:12:11:a3 Lease:0x66c39119}
	I0818 12:38:02.936259    5757 client.go:171] duration metric: took 1m0.953573715s to LocalClient.Create
	I0818 12:38:04.938219    5757 start.go:128] duration metric: took 1m2.987076431s to createHost
	I0818 12:38:04.938233    5757 start.go:83] releasing machines lock for "force-systemd-env-184000", held for 1m2.987176521s
	W0818 12:38:04.938302    5757 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-184000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7e:da:96:2e:b5:d1
	* Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-184000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7e:da:96:2e:b5:d1
	I0818 12:38:05.000466    5757 out.go:201] 
	W0818 12:38:05.021463    5757 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7e:da:96:2e:b5:d1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7e:da:96:2e:b5:d1
	W0818 12:38:05.021475    5757 out.go:270] * 
	* 
	W0818 12:38:05.022138    5757 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:38:05.083419    5757 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-184000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-184000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-184000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (184.522415ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-env-184000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-184000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
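The check that fails here is the point of TestForceSystemdEnv: with systemd forced via the environment, `docker info --format {{.CgroupDriver}}` inside the guest should print "systemd". In this run the assertion is never reached, since the VM has no IP and the control-plane endpoint cannot be resolved. A hypothetical sketch of the property being asserted (not the test's actual code), runnable inside a working guest:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Ask Docker for just the cgroup driver; expect "systemd" when forced.
        out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
        if err != nil {
            // In this run the equivalent step fails earlier, at the ssh/endpoint stage.
            fmt.Println("docker info failed:", err)
            return
        }
        if got := strings.TrimSpace(string(out)); got != "systemd" {
            fmt.Printf("unexpected cgroup driver %q, want systemd\n", got)
        }
    }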
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-18 12:38:05.375339 -0700 PDT m=+3640.824671297
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-184000 -n force-systemd-env-184000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-184000 -n force-systemd-env-184000: exit status 7 (80.297414ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0818 12:38:05.453753    5914 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0818 12:38:05.453775    5914 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-184000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-env-184000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-184000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-184000: (5.246554131s)
--- FAIL: TestForceSystemdEnv (234.20s)
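For context, the GUEST_PROVISION error above is the hyperkit driver polling /var/db/dhcpd_leases for the new VM's MAC address (7e:da:96:2e:b5:d1) and giving up when no lease appears for it; the 17 entries it does find all belong to earlier minikube VMs. A minimal Go sketch of that lookup, assuming the entries are already parsed into the struct the log prints (hypothetical code, not the actual driver). Note that the lease file drops leading zeros in MAC octets (see 42:d7:7:52:f5:2e above), so the comparison normalizes both sides:

    package main

    import (
        "fmt"
        "strings"
    )

    // dhcpEntry mirrors the fields the driver logs for each lease.
    type dhcpEntry struct {
        Name, IPAddress, HWAddress, ID, Lease string
    }

    // normalizeMAC lowercases a MAC and strips leading zeros from each octet,
    // matching the dhcpd_leases representation.
    func normalizeMAC(mac string) string {
        parts := strings.Split(strings.ToLower(mac), ":")
        for i, p := range parts {
            if t := strings.TrimLeft(p, "0"); t != "" {
                parts[i] = t
            } else {
                parts[i] = "0"
            }
        }
        return strings.Join(parts, ":")
    }

    // findIPForMAC scans the parsed leases for the VM's MAC.
    func findIPForMAC(entries []dhcpEntry, mac string) (string, bool) {
        want := normalizeMAC(mac)
        for _, e := range entries {
            if normalizeMAC(e.HWAddress) == want {
                return e.IPAddress, true
            }
        }
        return "", false
    }

    func main() {
        leases := []dhcpEntry{
            {Name: "minikube", IPAddress: "192.169.0.17", HWAddress: "42:d7:7:52:f5:2e"},
        }
        ip, ok := findIPForMAC(leases, "7e:da:96:2e:b5:d1")
        fmt.Println(ip, ok) // "" false: the state the driver retries on until it times out
    }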

TestErrorSpam/setup (76.48s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-719000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p nospam-719000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 --driver=hyperkit : exit status 90 (1m16.476794346s)

-- stdout --
	* [nospam-719000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "nospam-719000" primary control-plane node in "nospam-719000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 18 18:47:38 nospam-719000 systemd[1]: Starting Docker Application Container Engine...
	Aug 18 18:47:38 nospam-719000 dockerd[513]: time="2024-08-18T18:47:38.509859425Z" level=info msg="Starting up"
	Aug 18 18:47:38 nospam-719000 dockerd[513]: time="2024-08-18T18:47:38.510821957Z" level=info msg="containerd not running, starting managed containerd"
	Aug 18 18:47:38 nospam-719000 dockerd[513]: time="2024-08-18T18:47:38.512100002Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=520
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.530129643Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.545413023Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.545435188Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.545472273Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.545483343Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.545569074Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.545604393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.545733426Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.545769101Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.545782141Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.545789205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.545848287Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.546003224Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.548119030Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.548162074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.548266788Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.548302441Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.548374869Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.548465725Z" level=info msg="metadata content store policy set" policy=shared
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551050865Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551106871Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551121440Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551132827Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551142576Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551268897Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551499428Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551626193Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551661769Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551674102Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551684111Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551692697Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551701415Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551717378Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551729910Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551739083Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551747842Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551756604Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551769710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551784654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551798175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551808619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551817031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551825249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551833018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551841294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551849468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551859083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551867332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551875197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551888723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551902268Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551916750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551924876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551934659Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551990207Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.552028700Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.552038964Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.552047845Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.552054633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.552138614Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.552200515Z" level=info msg="NRI interface is disabled by configuration."
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.552395712Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.552513883Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.552570327Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.552583396Z" level=info msg="containerd successfully booted in 0.023131s"
	Aug 18 18:47:39 nospam-719000 dockerd[513]: time="2024-08-18T18:47:39.536059481Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 18 18:47:39 nospam-719000 dockerd[513]: time="2024-08-18T18:47:39.543256967Z" level=info msg="Loading containers: start."
	Aug 18 18:47:39 nospam-719000 dockerd[513]: time="2024-08-18T18:47:39.628397761Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 18 18:47:39 nospam-719000 dockerd[513]: time="2024-08-18T18:47:39.713935663Z" level=info msg="Loading containers: done."
	Aug 18 18:47:39 nospam-719000 dockerd[513]: time="2024-08-18T18:47:39.722770260Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 18 18:47:39 nospam-719000 dockerd[513]: time="2024-08-18T18:47:39.722887774Z" level=info msg="Daemon has completed initialization"
	Aug 18 18:47:39 nospam-719000 dockerd[513]: time="2024-08-18T18:47:39.751909630Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 18 18:47:39 nospam-719000 systemd[1]: Started Docker Application Container Engine.
	Aug 18 18:47:39 nospam-719000 dockerd[513]: time="2024-08-18T18:47:39.754348187Z" level=info msg="API listen on [::]:2376"
	Aug 18 18:47:40 nospam-719000 dockerd[513]: time="2024-08-18T18:47:40.774003578Z" level=info msg="Processing signal 'terminated'"
	Aug 18 18:47:40 nospam-719000 dockerd[513]: time="2024-08-18T18:47:40.774941972Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 18 18:47:40 nospam-719000 systemd[1]: Stopping Docker Application Container Engine...
	Aug 18 18:47:40 nospam-719000 dockerd[513]: time="2024-08-18T18:47:40.775035294Z" level=info msg="Daemon shutdown complete"
	Aug 18 18:47:40 nospam-719000 dockerd[513]: time="2024-08-18T18:47:40.775077667Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 18 18:47:40 nospam-719000 dockerd[513]: time="2024-08-18T18:47:40.775090704Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 18 18:47:41 nospam-719000 systemd[1]: docker.service: Deactivated successfully.
	Aug 18 18:47:41 nospam-719000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 18 18:47:41 nospam-719000 systemd[1]: Starting Docker Application Container Engine...
	Aug 18 18:47:41 nospam-719000 dockerd[913]: time="2024-08-18T18:47:41.809403235Z" level=info msg="Starting up"
	Aug 18 18:48:41 nospam-719000 dockerd[913]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 18 18:48:41 nospam-719000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 18 18:48:41 nospam-719000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 18 18:48:41 nospam-719000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-amd64 start -p nospam-719000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 --driver=hyperkit " failed: exit status 90
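The journal above pins down the failure mode: the first dockerd (pid 513) comes up cleanly at 18:47:39, minikube's "systemctl restart docker" stops it at 18:47:40, and the second dockerd (pid 913) then waits 60 seconds for /run/containerd/containerd.sock before failing with "context deadline exceeded". A standalone probe for that symptom, run inside the guest, might look like this (hypothetical diagnostic, not part of the test suite):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Try to reach the containerd socket the way a restarting dockerd would.
        conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 5*time.Second)
        if err != nil {
            // On the failing VM this errors out, matching the
            // "failed to dial" / "context deadline exceeded" lines in the journal.
            fmt.Println("containerd socket not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("containerd socket is up")
    }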
error_spam_test.go:96: unexpected stderr: "X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "Job for docker.service failed because the control process exited with error code."
error_spam_test.go:96: unexpected stderr: "See \"systemctl status docker.service\" and \"journalctl -xeu docker.service\" for details."
error_spam_test.go:96: unexpected stderr: "sudo journalctl --no-pager -u docker:"
error_spam_test.go:96: unexpected stderr: "-- stdout --"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 systemd[1]: Starting Docker Application Container Engine..."
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[513]: time=\"2024-08-18T18:47:38.509859425Z\" level=info msg=\"Starting up\""
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[513]: time=\"2024-08-18T18:47:38.510821957Z\" level=info msg=\"containerd not running, starting managed containerd\""
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[513]: time=\"2024-08-18T18:47:38.512100002Z\" level=info msg=\"started new containerd process\" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=520"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.530129643Z\" level=info msg=\"starting containerd\" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.545413023Z\" level=info msg=\"loading plugin \\\"io.containerd.event.v1.exchange\\\"...\" type=io.containerd.event.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.545435188Z\" level=info msg=\"loading plugin \\\"io.containerd.internal.v1.opt\\\"...\" type=io.containerd.internal.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.545472273Z\" level=info msg=\"loading plugin \\\"io.containerd.warning.v1.deprecations\\\"...\" type=io.containerd.warning.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.545483343Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.blockfile\\\"...\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.545569074Z\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.blockfile\\\"...\" error=\"no scratch file generator: skip plugin\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.545604393Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.btrfs\\\"...\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.545733426Z\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.btrfs\\\"...\" error=\"path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.545769101Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.devmapper\\\"...\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.545782141Z\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.devmapper\\\"...\" error=\"devmapper not configured: skip plugin\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.545789205Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.native\\\"...\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.545848287Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.overlayfs\\\"...\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.546003224Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.aufs\\\"...\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.548119030Z\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.aufs\\\"...\" error=\"aufs is not supported (modprobe aufs failed: exit status 1 \\\"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\\\n\\\"): skip plugin\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.548162074Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.zfs\\\"...\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.548266788Z\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.zfs\\\"...\" error=\"path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin\" type=io.containerd.snapshotter.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.548302441Z\" level=info msg=\"loading plugin \\\"io.containerd.content.v1.content\\\"...\" type=io.containerd.content.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.548374869Z\" level=info msg=\"loading plugin \\\"io.containerd.metadata.v1.bolt\\\"...\" type=io.containerd.metadata.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.548465725Z\" level=info msg=\"metadata content store policy set\" policy=shared"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551050865Z\" level=info msg=\"loading plugin \\\"io.containerd.gc.v1.scheduler\\\"...\" type=io.containerd.gc.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551106871Z\" level=info msg=\"loading plugin \\\"io.containerd.differ.v1.walking\\\"...\" type=io.containerd.differ.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551121440Z\" level=info msg=\"loading plugin \\\"io.containerd.lease.v1.manager\\\"...\" type=io.containerd.lease.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551132827Z\" level=info msg=\"loading plugin \\\"io.containerd.streaming.v1.manager\\\"...\" type=io.containerd.streaming.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551142576Z\" level=info msg=\"loading plugin \\\"io.containerd.runtime.v1.linux\\\"...\" type=io.containerd.runtime.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551268897Z\" level=info msg=\"loading plugin \\\"io.containerd.monitor.v1.cgroups\\\"...\" type=io.containerd.monitor.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551499428Z\" level=info msg=\"loading plugin \\\"io.containerd.runtime.v2.task\\\"...\" type=io.containerd.runtime.v2"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551626193Z\" level=info msg=\"loading plugin \\\"io.containerd.runtime.v2.shim\\\"...\" type=io.containerd.runtime.v2"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551661769Z\" level=info msg=\"loading plugin \\\"io.containerd.sandbox.store.v1.local\\\"...\" type=io.containerd.sandbox.store.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551674102Z\" level=info msg=\"loading plugin \\\"io.containerd.sandbox.controller.v1.local\\\"...\" type=io.containerd.sandbox.controller.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551684111Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.containers-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551692697Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.content-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551701415Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.diff-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551717378Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.images-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551729910Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.introspection-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551739083Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.namespaces-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551747842Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.snapshots-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551756604Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.tasks-service\\\"...\" type=io.containerd.service.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551769710Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.containers\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551784654Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.content\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551798175Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.diff\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551808619Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.events\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551817031Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.images\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551825249Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.introspection\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551833018Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.leases\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551841294Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.namespaces\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551849468Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.sandbox-controllers\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551859083Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.sandboxes\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551867332Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.snapshots\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551875197Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.streaming\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551888723Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.tasks\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551902268Z\" level=info msg=\"loading plugin \\\"io.containerd.transfer.v1.local\\\"...\" type=io.containerd.transfer.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551916750Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.transfer\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551924876Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.version\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551934659Z\" level=info msg=\"loading plugin \\\"io.containerd.internal.v1.restart\\\"...\" type=io.containerd.internal.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.551990207Z\" level=info msg=\"loading plugin \\\"io.containerd.tracing.processor.v1.otlp\\\"...\" type=io.containerd.tracing.processor.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.552028700Z\" level=info msg=\"skip loading plugin \\\"io.containerd.tracing.processor.v1.otlp\\\"...\" error=\"skip plugin: tracing endpoint not configured\" type=io.containerd.tracing.processor.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.552038964Z\" level=info msg=\"loading plugin \\\"io.containerd.internal.v1.tracing\\\"...\" type=io.containerd.internal.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.552047845Z\" level=info msg=\"skip loading plugin \\\"io.containerd.internal.v1.tracing\\\"...\" error=\"skip plugin: tracing endpoint not configured\" type=io.containerd.internal.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.552054633Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.healthcheck\\\"...\" type=io.containerd.grpc.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.552138614Z\" level=info msg=\"loading plugin \\\"io.containerd.nri.v1.nri\\\"...\" type=io.containerd.nri.v1"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.552200515Z\" level=info msg=\"NRI interface is disabled by configuration.\""
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.552395712Z\" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.552513883Z\" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.552570327Z\" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:38 nospam-719000 dockerd[520]: time=\"2024-08-18T18:47:38.552583396Z\" level=info msg=\"containerd successfully booted in 0.023131s\""
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:39 nospam-719000 dockerd[513]: time=\"2024-08-18T18:47:39.536059481Z\" level=info msg=\"[graphdriver] trying configured driver: overlay2\""
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:39 nospam-719000 dockerd[513]: time=\"2024-08-18T18:47:39.543256967Z\" level=info msg=\"Loading containers: start.\""
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:39 nospam-719000 dockerd[513]: time=\"2024-08-18T18:47:39.628397761Z\" level=warning msg=\"ip6tables is enabled, but cannot set up ip6tables chains\" error=\"failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\\nPerhaps ip6tables or your kernel needs to be upgraded.\\n (exit status 3)\""
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:39 nospam-719000 dockerd[513]: time=\"2024-08-18T18:47:39.713935663Z\" level=info msg=\"Loading containers: done.\""
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:39 nospam-719000 dockerd[513]: time=\"2024-08-18T18:47:39.722770260Z\" level=info msg=\"Docker daemon\" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:39 nospam-719000 dockerd[513]: time=\"2024-08-18T18:47:39.722887774Z\" level=info msg=\"Daemon has completed initialization\""
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:39 nospam-719000 dockerd[513]: time=\"2024-08-18T18:47:39.751909630Z\" level=info msg=\"API listen on /var/run/docker.sock\""
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:39 nospam-719000 systemd[1]: Started Docker Application Container Engine."
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:39 nospam-719000 dockerd[513]: time=\"2024-08-18T18:47:39.754348187Z\" level=info msg=\"API listen on [::]:2376\""
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:40 nospam-719000 dockerd[513]: time=\"2024-08-18T18:47:40.774003578Z\" level=info msg=\"Processing signal 'terminated'\""
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:40 nospam-719000 dockerd[513]: time=\"2024-08-18T18:47:40.774941972Z\" level=info msg=\"stopping event stream following graceful shutdown\" error=\"<nil>\" module=libcontainerd namespace=moby"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:40 nospam-719000 systemd[1]: Stopping Docker Application Container Engine..."
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:40 nospam-719000 dockerd[513]: time=\"2024-08-18T18:47:40.775035294Z\" level=info msg=\"Daemon shutdown complete\""
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:40 nospam-719000 dockerd[513]: time=\"2024-08-18T18:47:40.775077667Z\" level=info msg=\"stopping event stream following graceful shutdown\" error=\"context canceled\" module=libcontainerd namespace=plugins.moby"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:40 nospam-719000 dockerd[513]: time=\"2024-08-18T18:47:40.775090704Z\" level=info msg=\"stopping healthcheck following graceful shutdown\" module=libcontainerd"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:41 nospam-719000 systemd[1]: docker.service: Deactivated successfully."
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:41 nospam-719000 systemd[1]: Stopped Docker Application Container Engine."
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:41 nospam-719000 systemd[1]: Starting Docker Application Container Engine..."
error_spam_test.go:96: unexpected stderr: "Aug 18 18:47:41 nospam-719000 dockerd[913]: time=\"2024-08-18T18:47:41.809403235Z\" level=info msg=\"Starting up\""
error_spam_test.go:96: unexpected stderr: "Aug 18 18:48:41 nospam-719000 dockerd[913]: failed to start daemon: failed to dial \"/run/containerd/containerd.sock\": failed to dial \"/run/containerd/containerd.sock\": context deadline exceeded"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:48:41 nospam-719000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE"
error_spam_test.go:96: unexpected stderr: "Aug 18 18:48:41 nospam-719000 systemd[1]: docker.service: Failed with result 'exit-code'."
error_spam_test.go:96: unexpected stderr: "Aug 18 18:48:41 nospam-719000 systemd[1]: Failed to start Docker Application Container Engine."
error_spam_test.go:96: unexpected stderr: "-- /stdout --"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-719000] minikube v1.33.1 on Darwin 14.6.1
- MINIKUBE_LOCATION=19423
- KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the hyperkit driver based on user configuration
* Starting "nospam-719000" primary control-plane node in "nospam-719000" cluster
* Creating hyperkit VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...

error_spam_test.go:111: minikube stderr:
X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:

stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.

sudo journalctl --no-pager -u docker:
-- stdout --
Aug 18 18:47:38 nospam-719000 systemd[1]: Starting Docker Application Container Engine...
Aug 18 18:47:38 nospam-719000 dockerd[513]: time="2024-08-18T18:47:38.509859425Z" level=info msg="Starting up"
Aug 18 18:47:38 nospam-719000 dockerd[513]: time="2024-08-18T18:47:38.510821957Z" level=info msg="containerd not running, starting managed containerd"
Aug 18 18:47:38 nospam-719000 dockerd[513]: time="2024-08-18T18:47:38.512100002Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=520
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.530129643Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.545413023Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.545435188Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.545472273Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.545483343Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.545569074Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.545604393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.545733426Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.545769101Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.545782141Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.545789205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.545848287Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.546003224Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.548119030Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.548162074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.548266788Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.548302441Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.548374869Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.548465725Z" level=info msg="metadata content store policy set" policy=shared
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551050865Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551106871Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551121440Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551132827Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551142576Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551268897Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551499428Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551626193Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551661769Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551674102Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551684111Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551692697Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551701415Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551717378Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551729910Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551739083Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551747842Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551756604Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551769710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551784654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551798175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551808619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551817031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551825249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551833018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551841294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551849468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551859083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551867332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551875197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551888723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551902268Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551916750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551924876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551934659Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.551990207Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.552028700Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.552038964Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.552047845Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.552054633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.552138614Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.552200515Z" level=info msg="NRI interface is disabled by configuration."
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.552395712Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.552513883Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.552570327Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Aug 18 18:47:38 nospam-719000 dockerd[520]: time="2024-08-18T18:47:38.552583396Z" level=info msg="containerd successfully booted in 0.023131s"
Aug 18 18:47:39 nospam-719000 dockerd[513]: time="2024-08-18T18:47:39.536059481Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Aug 18 18:47:39 nospam-719000 dockerd[513]: time="2024-08-18T18:47:39.543256967Z" level=info msg="Loading containers: start."
Aug 18 18:47:39 nospam-719000 dockerd[513]: time="2024-08-18T18:47:39.628397761Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
Aug 18 18:47:39 nospam-719000 dockerd[513]: time="2024-08-18T18:47:39.713935663Z" level=info msg="Loading containers: done."
Aug 18 18:47:39 nospam-719000 dockerd[513]: time="2024-08-18T18:47:39.722770260Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
Aug 18 18:47:39 nospam-719000 dockerd[513]: time="2024-08-18T18:47:39.722887774Z" level=info msg="Daemon has completed initialization"
Aug 18 18:47:39 nospam-719000 dockerd[513]: time="2024-08-18T18:47:39.751909630Z" level=info msg="API listen on /var/run/docker.sock"
Aug 18 18:47:39 nospam-719000 systemd[1]: Started Docker Application Container Engine.
Aug 18 18:47:39 nospam-719000 dockerd[513]: time="2024-08-18T18:47:39.754348187Z" level=info msg="API listen on [::]:2376"
Aug 18 18:47:40 nospam-719000 dockerd[513]: time="2024-08-18T18:47:40.774003578Z" level=info msg="Processing signal 'terminated'"
Aug 18 18:47:40 nospam-719000 dockerd[513]: time="2024-08-18T18:47:40.774941972Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Aug 18 18:47:40 nospam-719000 systemd[1]: Stopping Docker Application Container Engine...
Aug 18 18:47:40 nospam-719000 dockerd[513]: time="2024-08-18T18:47:40.775035294Z" level=info msg="Daemon shutdown complete"
Aug 18 18:47:40 nospam-719000 dockerd[513]: time="2024-08-18T18:47:40.775077667Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Aug 18 18:47:40 nospam-719000 dockerd[513]: time="2024-08-18T18:47:40.775090704Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Aug 18 18:47:41 nospam-719000 systemd[1]: docker.service: Deactivated successfully.
Aug 18 18:47:41 nospam-719000 systemd[1]: Stopped Docker Application Container Engine.
Aug 18 18:47:41 nospam-719000 systemd[1]: Starting Docker Application Container Engine...
Aug 18 18:47:41 nospam-719000 dockerd[913]: time="2024-08-18T18:47:41.809403235Z" level=info msg="Starting up"
Aug 18 18:48:41 nospam-719000 dockerd[913]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Aug 18 18:48:41 nospam-719000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Aug 18 18:48:41 nospam-719000 systemd[1]: docker.service: Failed with result 'exit-code'.
Aug 18 18:48:41 nospam-719000 systemd[1]: Failed to start Docker Application Container Engine.

                                                
                                                
-- /stdout --
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (76.48s)
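The failure chain in the output above: minikube rewrote the docker unit and restarted the service, and the restarted dockerd (pid 913) then spent the 60 seconds from 18:47:41 to 18:48:41 waiting on /run/containerd/containerd.sock before giving up with "context deadline exceeded". Because the daemon never came back, the three kubeadm init sub-step lines the test greps for were never printed. The snippet below is a minimal sketch of that failure mode, not dockerd or minikube source: it re-dials a unix socket until a context deadline expires, roughly what a daemon's startup dial loop does while waiting for a dependency's socket to appear. The socket path, the 5-second timeout, and the 500ms retry interval are illustrative assumptions.

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	// dialWithRetry re-dials the unix socket until it connects or the
	// context deadline expires, mirroring a daemon waiting for a
	// dependency's socket to come up at startup.
	func dialWithRetry(ctx context.Context, path string) (net.Conn, error) {
		var d net.Dialer
		for {
			conn, err := d.DialContext(ctx, "unix", path)
			if err == nil {
				return conn, nil
			}
			select {
			case <-ctx.Done():
				// After the deadline this wraps context.DeadlineExceeded,
				// the same shape as the dockerd error logged above.
				return nil, fmt.Errorf("failed to dial %q: %w", path, ctx.Err())
			case <-time.After(500 * time.Millisecond):
				// retry until the deadline
			}
		}
	}

	func main() {
		// 5s here instead of dockerd's 60s, purely to keep the example quick.
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		if _, err := dialWithRetry(ctx, "/run/containerd/containerd.sock"); err != nil {
			fmt.Println(err)
		}
	}

If nothing is listening on that socket, the program prints the same "failed to dial ... context deadline exceeded" class of error seen at 18:48:41 above; the failure in this run suggests containerd never came up within docker.service's start window.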

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (258.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-373000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-373000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-373000 -v=7 --alsologtostderr: (27.144019376s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-373000 --wait=true -v=7 --alsologtostderr
E0818 12:05:21.458225    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:06:48.076147    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:07:37.584612    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:08:05.295800    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:08:11.154556    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-373000 --wait=true -v=7 --alsologtostderr: exit status 90 (3m47.356847111s)

                                                
                                                
-- stdout --
	* [ha-373000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "ha-373000" primary control-plane node in "ha-373000" cluster
	* Restarting existing hyperkit VM for "ha-373000" ...
	* Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	* Enabled addons: 
	
	* Starting "ha-373000-m02" control-plane node in "ha-373000" cluster
	* Restarting existing hyperkit VM for "ha-373000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5
	* Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	  - env NO_PROXY=192.169.0.5
	* Verifying Kubernetes components...
	
	* Starting "ha-373000-m03" control-plane node in "ha-373000" cluster
	* Restarting existing hyperkit VM for "ha-373000-m03" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5,192.169.0.6
	* Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	  - env NO_PROXY=192.169.0.5
	  - env NO_PROXY=192.169.0.5,192.169.0.6
	* Verifying Kubernetes components...
	
	* Starting "ha-373000-m04" worker node in "ha-373000" cluster
	* Restarting existing hyperkit VM for "ha-373000-m04" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 12:04:31.983272    3824 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:04:31.983454    3824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:04:31.983459    3824 out.go:358] Setting ErrFile to fd 2...
	I0818 12:04:31.983463    3824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:04:31.983623    3824 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1007/.minikube/bin
	I0818 12:04:31.985167    3824 out.go:352] Setting JSON to false
	I0818 12:04:32.009018    3824 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2042,"bootTime":1724005829,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0818 12:04:32.009111    3824 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:04:32.030819    3824 out.go:177] * [ha-373000] minikube v1.33.1 on Darwin 14.6.1
	I0818 12:04:32.074529    3824 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:04:32.074586    3824 notify.go:220] Checking for updates...
	I0818 12:04:32.118375    3824 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:04:32.139430    3824 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0818 12:04:32.160729    3824 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:04:32.182618    3824 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	I0818 12:04:32.204484    3824 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:04:32.226364    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:04:32.226552    3824 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:04:32.227242    3824 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:04:32.227322    3824 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:04:32.236867    3824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51772
	I0818 12:04:32.237225    3824 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:04:32.237659    3824 main.go:141] libmachine: Using API Version  1
	I0818 12:04:32.237676    3824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:04:32.237931    3824 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:04:32.238060    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:04:32.267813    3824 out.go:177] * Using the hyperkit driver based on existing profile
	I0818 12:04:32.289474    3824 start.go:297] selected driver: hyperkit
	I0818 12:04:32.289504    3824 start.go:901] validating driver "hyperkit" against &{Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:04:32.289713    3824 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:04:32.289908    3824 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:04:32.290109    3824 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19423-1007/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0818 12:04:32.300191    3824 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0818 12:04:32.305600    3824 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:04:32.305625    3824 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0818 12:04:32.309104    3824 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:04:32.309145    3824 cni.go:84] Creating CNI manager for ""
	I0818 12:04:32.309152    3824 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0818 12:04:32.309217    3824 start.go:340] cluster config:
	{Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:04:32.309317    3824 iso.go:125] acquiring lock: {Name:mkfcc70059a4397f76252ba67d02448d3569c468 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:04:32.358744    3824 out.go:177] * Starting "ha-373000" primary control-plane node in "ha-373000" cluster
	I0818 12:04:32.379125    3824 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:04:32.379197    3824 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0818 12:04:32.379221    3824 cache.go:56] Caching tarball of preloaded images
	I0818 12:04:32.379454    3824 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0818 12:04:32.379473    3824 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:04:32.379655    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:04:32.380668    3824 start.go:360] acquireMachinesLock for ha-373000: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:04:32.380793    3824 start.go:364] duration metric: took 98.513µs to acquireMachinesLock for "ha-373000"
	I0818 12:04:32.380830    3824 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:04:32.380850    3824 fix.go:54] fixHost starting: 
	I0818 12:04:32.381275    3824 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:04:32.381305    3824 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:04:32.390300    3824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51774
	I0818 12:04:32.390644    3824 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:04:32.390984    3824 main.go:141] libmachine: Using API Version  1
	I0818 12:04:32.390995    3824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:04:32.391207    3824 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:04:32.391330    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:04:32.391423    3824 main.go:141] libmachine: (ha-373000) Calling .GetState
	I0818 12:04:32.391500    3824 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:04:32.391596    3824 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid from json: 2975
	I0818 12:04:32.392493    3824 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid 2975 missing from process table
	I0818 12:04:32.392518    3824 fix.go:112] recreateIfNeeded on ha-373000: state=Stopped err=<nil>
	I0818 12:04:32.392535    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	W0818 12:04:32.392619    3824 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:04:32.435089    3824 out.go:177] * Restarting existing hyperkit VM for "ha-373000" ...
	I0818 12:04:32.455966    3824 main.go:141] libmachine: (ha-373000) Calling .Start
	I0818 12:04:32.456397    3824 main.go:141] libmachine: (ha-373000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid
	I0818 12:04:32.456421    3824 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:04:32.458400    3824 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid 2975 missing from process table
	I0818 12:04:32.458413    3824 main.go:141] libmachine: (ha-373000) DBG | pid 2975 is in state "Stopped"
	I0818 12:04:32.458431    3824 main.go:141] libmachine: (ha-373000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid...
	I0818 12:04:32.458650    3824 main.go:141] libmachine: (ha-373000) DBG | Using UUID 2f6e5f86-d003-4f9b-8f55-d5f48a14c3df
	I0818 12:04:32.582503    3824 main.go:141] libmachine: (ha-373000) DBG | Generated MAC be:21:66:25:9a:b1
	I0818 12:04:32.582527    3824 main.go:141] libmachine: (ha-373000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000
	I0818 12:04:32.582675    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2f6e5f86-d003-4f9b-8f55-d5f48a14c3df", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037d230)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:04:32.582701    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2f6e5f86-d003-4f9b-8f55-d5f48a14c3df", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037d230)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:04:32.582750    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2f6e5f86-d003-4f9b-8f55-d5f48a14c3df", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/ha-373000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"}
	I0818 12:04:32.582797    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2f6e5f86-d003-4f9b-8f55-d5f48a14c3df -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/ha-373000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"
	I0818 12:04:32.582809    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 12:04:32.584342    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 DEBUG: hyperkit: Pid is 3836
	I0818 12:04:32.584802    3824 main.go:141] libmachine: (ha-373000) DBG | Attempt 0
	I0818 12:04:32.584828    3824 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:04:32.584904    3824 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid from json: 3836
	I0818 12:04:32.586608    3824 main.go:141] libmachine: (ha-373000) DBG | Searching for be:21:66:25:9a:b1 in /var/db/dhcpd_leases ...
	I0818 12:04:32.586694    3824 main.go:141] libmachine: (ha-373000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0818 12:04:32.586716    3824 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c245a6}
	I0818 12:04:32.586736    3824 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39707}
	I0818 12:04:32.586754    3824 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c39672}
	I0818 12:04:32.586763    3824 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c395f4}
	I0818 12:04:32.586768    3824 main.go:141] libmachine: (ha-373000) DBG | Found match: be:21:66:25:9a:b1
	I0818 12:04:32.586791    3824 main.go:141] libmachine: (ha-373000) DBG | IP: 192.169.0.5
	I0818 12:04:32.586800    3824 main.go:141] libmachine: (ha-373000) Calling .GetConfigRaw
	I0818 12:04:32.587439    3824 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:04:32.587606    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:04:32.588031    3824 machine.go:93] provisionDockerMachine start ...
	I0818 12:04:32.588043    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:04:32.588201    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:32.588339    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:32.588463    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:32.588602    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:32.588712    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:32.588878    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:04:32.589128    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:04:32.589140    3824 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 12:04:32.592359    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 12:04:32.649659    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 12:04:32.650386    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:04:32.650405    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:04:32.650422    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:04:32.650441    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:04:33.028577    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 12:04:33.028592    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 12:04:33.143700    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:04:33.143730    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:04:33.143746    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:04:33.143773    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:04:33.144665    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 12:04:33.144677    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 12:04:38.692844    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:38 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0818 12:04:38.692980    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:38 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0818 12:04:38.692989    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:38 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0818 12:04:38.717966    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:38 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0818 12:04:43.657661    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 12:04:43.657675    3824 main.go:141] libmachine: (ha-373000) Calling .GetMachineName
	I0818 12:04:43.657817    3824 buildroot.go:166] provisioning hostname "ha-373000"
	I0818 12:04:43.657829    3824 main.go:141] libmachine: (ha-373000) Calling .GetMachineName
	I0818 12:04:43.657947    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:43.658033    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:43.658131    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:43.658218    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:43.658320    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:43.658446    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:04:43.658583    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:04:43.658592    3824 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-373000 && echo "ha-373000" | sudo tee /etc/hostname
	I0818 12:04:43.726337    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-373000
	
	I0818 12:04:43.726356    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:43.726492    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:43.726602    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:43.726701    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:43.726793    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:43.726914    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:04:43.727062    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:04:43.727073    3824 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-373000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-373000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-373000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 12:04:43.791204    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 12:04:43.791222    3824 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1007/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1007/.minikube}
	I0818 12:04:43.791240    3824 buildroot.go:174] setting up certificates
	I0818 12:04:43.791251    3824 provision.go:84] configureAuth start
	I0818 12:04:43.791258    3824 main.go:141] libmachine: (ha-373000) Calling .GetMachineName
	I0818 12:04:43.791389    3824 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:04:43.791486    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:43.791580    3824 provision.go:143] copyHostCerts
	I0818 12:04:43.791612    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:04:43.791682    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem, removing ...
	I0818 12:04:43.791691    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:04:43.791831    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem (1082 bytes)
	I0818 12:04:43.792037    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:04:43.792077    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem, removing ...
	I0818 12:04:43.792082    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:04:43.792161    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem (1123 bytes)
	I0818 12:04:43.792314    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:04:43.792360    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem, removing ...
	I0818 12:04:43.792365    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:04:43.792438    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem (1675 bytes)
	I0818 12:04:43.792585    3824 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem org=jenkins.ha-373000 san=[127.0.0.1 192.169.0.5 ha-373000 localhost minikube]
	I0818 12:04:43.849995    3824 provision.go:177] copyRemoteCerts
	I0818 12:04:43.850046    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 12:04:43.850064    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:43.850180    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:43.850277    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:43.850383    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:43.850475    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:04:43.887087    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 12:04:43.887163    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 12:04:43.906588    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 12:04:43.906643    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0818 12:04:43.926387    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 12:04:43.926447    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 12:04:43.945959    3824 provision.go:87] duration metric: took 154.69571ms to configureAuth
	I0818 12:04:43.945972    3824 buildroot.go:189] setting minikube options for container-runtime
	I0818 12:04:43.946140    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:04:43.946153    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:04:43.946287    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:43.946379    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:43.946466    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:43.946557    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:43.946656    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:43.946772    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:04:43.946901    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:04:43.946910    3824 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0818 12:04:44.005207    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0818 12:04:44.005222    3824 buildroot.go:70] root file system type: tmpfs
	I0818 12:04:44.005300    3824 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0818 12:04:44.005312    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:44.005446    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:44.005534    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:44.005629    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:44.005730    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:44.005877    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:04:44.006020    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:04:44.006065    3824 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0818 12:04:44.073819    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0818 12:04:44.073841    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:44.073984    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:44.074098    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:44.074187    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:44.074268    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:44.074392    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:04:44.074539    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:04:44.074553    3824 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0818 12:04:45.741799    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0818 12:04:45.741813    3824 machine.go:96] duration metric: took 13.154182627s to provisionDockerMachine
	I0818 12:04:45.741824    3824 start.go:293] postStartSetup for "ha-373000" (driver="hyperkit")
	I0818 12:04:45.741833    3824 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 12:04:45.741844    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:04:45.742025    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 12:04:45.742046    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:45.742143    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:45.742239    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:45.742328    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:45.742403    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:04:45.779742    3824 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 12:04:45.785976    3824 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 12:04:45.785994    3824 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/addons for local assets ...
	I0818 12:04:45.786100    3824 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/files for local assets ...
	I0818 12:04:45.786286    3824 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> 15262.pem in /etc/ssl/certs
	I0818 12:04:45.786293    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /etc/ssl/certs/15262.pem
	I0818 12:04:45.786507    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 12:04:45.795153    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:04:45.825008    3824 start.go:296] duration metric: took 83.165524ms for postStartSetup
	I0818 12:04:45.825032    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:04:45.825216    3824 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0818 12:04:45.825229    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:45.825330    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:45.825446    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:45.825536    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:45.825609    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:04:45.861497    3824 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0818 12:04:45.861553    3824 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
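	The restore step above combines two rsync behaviors: --archive recurses and preserves permissions, ownership, and timestamps, while --update skips any destination file that is already newer than the backup copy, so a live /etc is not clobbered. Generic form (paths illustrative):
	
	sudo rsync --archive --update /backup/etc /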
	I0818 12:04:45.913975    3824 fix.go:56] duration metric: took 13.533549329s for fixHost
	I0818 12:04:45.914000    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:45.914142    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:45.914243    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:45.914335    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:45.914429    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:45.914562    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:04:45.914716    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:04:45.914724    3824 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 12:04:45.972708    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724007885.983977698
	
	I0818 12:04:45.972721    3824 fix.go:216] guest clock: 1724007885.983977698
	I0818 12:04:45.972726    3824 fix.go:229] Guest: 2024-08-18 12:04:45.983977698 -0700 PDT Remote: 2024-08-18 12:04:45.913989 -0700 PDT m=+13.967759099 (delta=69.988698ms)
	I0818 12:04:45.972744    3824 fix.go:200] guest clock delta is within tolerance: 69.988698ms
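	(The reported delta is simply the two timestamps above subtracted: 1724007885.983977698 - 1724007885.913989 = 0.069988698 s ≈ 69.988698 ms, hence "within tolerance".)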
	I0818 12:04:45.972748    3824 start.go:83] releasing machines lock for "ha-373000", held for 13.592366774s
	I0818 12:04:45.972769    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:04:45.972898    3824 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:04:45.973002    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:04:45.973353    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:04:45.973448    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:04:45.973532    3824 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 12:04:45.973568    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:45.973602    3824 ssh_runner.go:195] Run: cat /version.json
	I0818 12:04:45.973622    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:45.973654    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:45.973709    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:45.973731    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:45.973791    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:45.973819    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:45.973885    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:04:45.973899    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:45.973975    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:04:46.010017    3824 ssh_runner.go:195] Run: systemctl --version
	I0818 12:04:46.068668    3824 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 12:04:46.073848    3824 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 12:04:46.073896    3824 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 12:04:46.088665    3824 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 12:04:46.088678    3824 start.go:495] detecting cgroup driver to use...
	I0818 12:04:46.088793    3824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:04:46.104594    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0818 12:04:46.113505    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0818 12:04:46.122459    3824 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0818 12:04:46.122502    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0818 12:04:46.131401    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:04:46.140195    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0818 12:04:46.148984    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:04:46.157732    3824 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 12:04:46.166637    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0818 12:04:46.175587    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0818 12:04:46.184399    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0818 12:04:46.193294    3824 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 12:04:46.201351    3824 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 12:04:46.209432    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:04:46.307330    3824 ssh_runner.go:195] Run: sudo systemctl restart containerd
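	The SystemdCgroup edit in the sed sequence above flips a single key in containerd's CRI runc options. The fragment of /etc/containerd/config.toml being targeted looks roughly like this (illustrative; surrounding keys vary by containerd version):
	
	[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  # false keeps the cgroupfs driver selected above; true would hand
	  # cgroup management to systemd instead.
	  SystemdCgroup = false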
	I0818 12:04:46.326804    3824 start.go:495] detecting cgroup driver to use...
	I0818 12:04:46.326886    3824 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0818 12:04:46.339615    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:04:46.350592    3824 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 12:04:46.370916    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:04:46.381030    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:04:46.391260    3824 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0818 12:04:46.416547    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:04:46.426851    3824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:04:46.442033    3824 ssh_runner.go:195] Run: which cri-dockerd
	I0818 12:04:46.444975    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0818 12:04:46.453011    3824 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0818 12:04:46.466482    3824 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0818 12:04:46.579328    3824 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0818 12:04:46.679794    3824 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0818 12:04:46.679875    3824 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0818 12:04:46.693907    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:04:46.791012    3824 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 12:04:49.093057    3824 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.302096527s)
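	The 130-byte /etc/docker/daemon.json copied above is not echoed in the log; a minimal sketch of a daemon.json selecting the cgroupfs driver as this step describes (contents assumed, not taken from the run):
	
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}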
	I0818 12:04:49.093136    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0818 12:04:49.103320    3824 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0818 12:04:49.115838    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:04:49.126241    3824 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0818 12:04:49.218487    3824 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0818 12:04:49.318047    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:04:49.424425    3824 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0818 12:04:49.438128    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:04:49.449061    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:04:49.547962    3824 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0818 12:04:49.611460    3824 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0818 12:04:49.611544    3824 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0818 12:04:49.616359    3824 start.go:563] Will wait 60s for crictl version
	I0818 12:04:49.616414    3824 ssh_runner.go:195] Run: which crictl
	I0818 12:04:49.620236    3824 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 12:04:49.646389    3824 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0818 12:04:49.646459    3824 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:04:49.664790    3824 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:04:49.705551    3824 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0818 12:04:49.705601    3824 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:04:49.706071    3824 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0818 12:04:49.710649    3824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
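	The one-liner above is minikube's idempotent /etc/hosts update: filter out any existing line ending in the name, append the fresh mapping, and install the result through a temp file with sudo cp so the privileged write happens in a single step. The same idiom with placeholder values:
	
	{ grep -v $'\thost.example.test$' /etc/hosts; echo $'192.0.2.1\thost.example.test'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts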
	I0818 12:04:49.720358    3824 kubeadm.go:883] updating cluster {Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false f
reshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 12:04:49.720454    3824 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:04:49.720509    3824 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0818 12:04:49.733920    3824 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0818 12:04:49.733938    3824 docker.go:615] Images already preloaded, skipping extraction
	I0818 12:04:49.734009    3824 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0818 12:04:49.747065    3824 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0818 12:04:49.747084    3824 cache_images.go:84] Images are preloaded, skipping loading
	I0818 12:04:49.747099    3824 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0818 12:04:49.747179    3824 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-373000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 12:04:49.747253    3824 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0818 12:04:49.785583    3824 cni.go:84] Creating CNI manager for ""
	I0818 12:04:49.785600    3824 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0818 12:04:49.785611    3824 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 12:04:49.785627    3824 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-373000 NodeName:ha-373000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 12:04:49.785710    3824 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-373000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
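	Recent kubeadm releases (v1.26 and later) can statically check a rendered file like the one above before it is used; a sketch against the path this config is written to later in the step:
	
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new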
	
	I0818 12:04:49.785725    3824 kube-vip.go:115] generating kube-vip config ...
	I0818 12:04:49.785779    3824 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0818 12:04:49.798283    3824 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0818 12:04:49.798356    3824 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
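	This manifest is installed under /etc/kubernetes/manifests, the staticPodPath set in the kubelet config above, so kubelet runs kube-vip as a static pod without going through the API server. One way to confirm it is running once the node is up (sketch):
	
	sudo crictl pods --name kube-vip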
	I0818 12:04:49.798405    3824 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 12:04:49.807035    3824 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 12:04:49.807081    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0818 12:04:49.814327    3824 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0818 12:04:49.827868    3824 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 12:04:49.841383    3824 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0818 12:04:49.855255    3824 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0818 12:04:49.868811    3824 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0818 12:04:49.871686    3824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 12:04:49.880822    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:04:49.979755    3824 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 12:04:49.993936    3824 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000 for IP: 192.169.0.5
	I0818 12:04:49.993948    3824 certs.go:194] generating shared ca certs ...
	I0818 12:04:49.993960    3824 certs.go:226] acquiring lock for ca certs: {Name:mkf9c4ce6e92bb713042213e4605fc93500c1ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:04:49.994155    3824 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key
	I0818 12:04:49.994224    3824 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key
	I0818 12:04:49.994234    3824 certs.go:256] generating profile certs ...
	I0818 12:04:49.994338    3824 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.key
	I0818 12:04:49.994359    3824 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.01b2710d
	I0818 12:04:49.994377    3824 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.01b2710d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0818 12:04:50.091613    3824 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.01b2710d ...
	I0818 12:04:50.091630    3824 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.01b2710d: {Name:mkea55c8a03a32b3ce24aa90dfb71f1f97bc2354 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:04:50.092214    3824 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.01b2710d ...
	I0818 12:04:50.092225    3824 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.01b2710d: {Name:mkcfe2a6c64cb35ce66e627cea270e19236eac55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:04:50.092457    3824 certs.go:381] copying /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.01b2710d -> /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt
	I0818 12:04:50.092702    3824 certs.go:385] copying /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.01b2710d -> /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key
	I0818 12:04:50.092980    3824 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key
	I0818 12:04:50.092991    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0818 12:04:50.093016    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0818 12:04:50.093037    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0818 12:04:50.093056    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0818 12:04:50.093084    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0818 12:04:50.093110    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0818 12:04:50.093130    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0818 12:04:50.093151    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0818 12:04:50.093255    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem (1338 bytes)
	W0818 12:04:50.093309    3824 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526_empty.pem, impossibly tiny 0 bytes
	I0818 12:04:50.093320    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem (1675 bytes)
	I0818 12:04:50.093368    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem (1082 bytes)
	I0818 12:04:50.093405    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem (1123 bytes)
	I0818 12:04:50.093439    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem (1675 bytes)
	I0818 12:04:50.093508    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:04:50.093540    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /usr/share/ca-certificates/15262.pem
	I0818 12:04:50.093561    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:04:50.093579    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem -> /usr/share/ca-certificates/1526.pem
	I0818 12:04:50.094042    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 12:04:50.115280    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0818 12:04:50.139151    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 12:04:50.164514    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0818 12:04:50.185623    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0818 12:04:50.205278    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 12:04:50.227215    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 12:04:50.252699    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 12:04:50.287877    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /usr/share/ca-certificates/15262.pem (1708 bytes)
	I0818 12:04:50.314703    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 12:04:50.362716    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem --> /usr/share/ca-certificates/1526.pem (1338 bytes)
	I0818 12:04:50.396868    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 12:04:50.413037    3824 ssh_runner.go:195] Run: openssl version
	I0818 12:04:50.417460    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15262.pem && ln -fs /usr/share/ca-certificates/15262.pem /etc/ssl/certs/15262.pem"
	I0818 12:04:50.427101    3824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15262.pem
	I0818 12:04:50.430627    3824 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:54 /usr/share/ca-certificates/15262.pem
	I0818 12:04:50.430663    3824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15262.pem
	I0818 12:04:50.436239    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15262.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 12:04:50.445438    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 12:04:50.454433    3824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:04:50.458262    3824 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:38 /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:04:50.458306    3824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:04:50.462517    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 12:04:50.471554    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1526.pem && ln -fs /usr/share/ca-certificates/1526.pem /etc/ssl/certs/1526.pem"
	I0818 12:04:50.480511    3824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1526.pem
	I0818 12:04:50.483892    3824 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:54 /usr/share/ca-certificates/1526.pem
	I0818 12:04:50.483930    3824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1526.pem
	I0818 12:04:50.488142    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1526.pem /etc/ssl/certs/51391683.0"
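	The hash-then-symlink pairs above follow OpenSSL's CA-directory convention: openssl x509 -hash prints the certificate's subject hash, and a <hash>.0 symlink in /etc/ssl/certs is how the library looks a CA up by hash (the same layout c_rehash generates). The two steps combined:
	
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"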
	I0818 12:04:50.497129    3824 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 12:04:50.500599    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 12:04:50.505066    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 12:04:50.509424    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 12:04:50.513887    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 12:04:50.518263    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 12:04:50.522558    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
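	Each -checkend 86400 above asks whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means yes, non-zero would trigger regeneration. Standalone form:
	
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "valid for at least 24h" \
	  || echo "expires within 24h"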
	I0818 12:04:50.526858    3824 kubeadm.go:392] StartCluster: {Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 C
lusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:04:50.526981    3824 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0818 12:04:50.544620    3824 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 12:04:50.553037    3824 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 12:04:50.553052    3824 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 12:04:50.553092    3824 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 12:04:50.561771    3824 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 12:04:50.562091    3824 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-373000" does not appear in /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:04:50.562172    3824 kubeconfig.go:62] /Users/jenkins/minikube-integration/19423-1007/kubeconfig needs updating (will repair): [kubeconfig missing "ha-373000" cluster setting kubeconfig missing "ha-373000" context setting]
	I0818 12:04:50.562375    3824 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/kubeconfig: {Name:mk884388b2510ace5b23e8fdd3343cda9524a1f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:04:50.562752    3824 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:04:50.562947    3824 kapi.go:59] client config for ha-373000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xcb43f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0818 12:04:50.563273    3824 cert_rotation.go:140] Starting client certificate rotation controller
	I0818 12:04:50.563454    3824 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 12:04:50.571351    3824 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0818 12:04:50.571368    3824 kubeadm.go:597] duration metric: took 18.311426ms to restartPrimaryControlPlane
	I0818 12:04:50.571374    3824 kubeadm.go:394] duration metric: took 44.525606ms to StartCluster
	I0818 12:04:50.571381    3824 settings.go:142] acquiring lock: {Name:mk54874f9a6ab5b428c8697c9ef71cbcdde2d89e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:04:50.571461    3824 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:04:50.571852    3824 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/kubeconfig: {Name:mk884388b2510ace5b23e8fdd3343cda9524a1f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:04:50.572070    3824 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:04:50.572083    3824 start.go:241] waiting for startup goroutines ...
	I0818 12:04:50.572098    3824 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 12:04:50.572212    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:04:50.614034    3824 out.go:177] * Enabled addons: 
	I0818 12:04:50.635950    3824 addons.go:510] duration metric: took 63.86135ms for enable addons: enabled=[]
	I0818 12:04:50.635988    3824 start.go:246] waiting for cluster config update ...
	I0818 12:04:50.636000    3824 start.go:255] writing updated cluster config ...
	I0818 12:04:50.657675    3824 out.go:201] 
	I0818 12:04:50.679473    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:04:50.679623    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:04:50.701920    3824 out.go:177] * Starting "ha-373000-m02" control-plane node in "ha-373000" cluster
	I0818 12:04:50.743977    3824 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:04:50.744059    3824 cache.go:56] Caching tarball of preloaded images
	I0818 12:04:50.744255    3824 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0818 12:04:50.744273    3824 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:04:50.744402    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:04:50.745331    3824 start.go:360] acquireMachinesLock for ha-373000-m02: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:04:50.745437    3824 start.go:364] duration metric: took 80.166µs to acquireMachinesLock for "ha-373000-m02"
	I0818 12:04:50.745464    3824 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:04:50.745472    3824 fix.go:54] fixHost starting: m02
	I0818 12:04:50.745909    3824 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:04:50.745945    3824 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:04:50.754990    3824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51796
	I0818 12:04:50.755371    3824 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:04:50.755727    3824 main.go:141] libmachine: Using API Version  1
	I0818 12:04:50.755746    3824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:04:50.755953    3824 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:04:50.756082    3824 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:04:50.756178    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetState
	I0818 12:04:50.756271    3824 main.go:141] libmachine: (ha-373000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:04:50.756346    3824 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid from json: 3777
	I0818 12:04:50.757254    3824 fix.go:112] recreateIfNeeded on ha-373000-m02: state=Stopped err=<nil>
	I0818 12:04:50.757265    3824 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:04:50.757267    3824 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid 3777 missing from process table
	W0818 12:04:50.757351    3824 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:04:50.798825    3824 out.go:177] * Restarting existing hyperkit VM for "ha-373000-m02" ...
	I0818 12:04:50.819905    3824 main.go:141] libmachine: (ha-373000-m02) Calling .Start
	I0818 12:04:50.820210    3824 main.go:141] libmachine: (ha-373000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:04:50.820266    3824 main.go:141] libmachine: (ha-373000-m02) minikube might have been shut down in an unclean way; the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid
	I0818 12:04:50.822018    3824 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid 3777 missing from process table
	I0818 12:04:50.822032    3824 main.go:141] libmachine: (ha-373000-m02) DBG | pid 3777 is in state "Stopped"
	I0818 12:04:50.822050    3824 main.go:141] libmachine: (ha-373000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid...
	I0818 12:04:50.822421    3824 main.go:141] libmachine: (ha-373000-m02) DBG | Using UUID 7a237572-4e62-4b98-a476-83254bfde967
	I0818 12:04:50.852069    3824 main.go:141] libmachine: (ha-373000-m02) DBG | Generated MAC ca:b5:c4:e6:47:79
	I0818 12:04:50.852091    3824 main.go:141] libmachine: (ha-373000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000
	I0818 12:04:50.852254    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7a237572-4e62-4b98-a476-83254bfde967", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b05a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:04:50.852282    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7a237572-4e62-4b98-a476-83254bfde967", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b05a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:04:50.852317    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "7a237572-4e62-4b98-a476-83254bfde967", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/ha-373000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machine
s/ha-373000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"}
	I0818 12:04:50.852367    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 7a237572-4e62-4b98-a476-83254bfde967 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/ha-373000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd,earlyprintk=serial loglevel=3 console=t
tyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"
	I0818 12:04:50.852388    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 12:04:50.854019    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 DEBUG: hyperkit: Pid is 3847
	I0818 12:04:50.854499    3824 main.go:141] libmachine: (ha-373000-m02) DBG | Attempt 0
	I0818 12:04:50.854512    3824 main.go:141] libmachine: (ha-373000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:04:50.854595    3824 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid from json: 3847
	I0818 12:04:50.856201    3824 main.go:141] libmachine: (ha-373000-m02) DBG | Searching for ca:b5:c4:e6:47:79 in /var/db/dhcpd_leases ...
	I0818 12:04:50.856261    3824 main.go:141] libmachine: (ha-373000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0818 12:04:50.856275    3824 main.go:141] libmachine: (ha-373000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39749}
	I0818 12:04:50.856297    3824 main.go:141] libmachine: (ha-373000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c245a6}
	I0818 12:04:50.856304    3824 main.go:141] libmachine: (ha-373000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39707}
	I0818 12:04:50.856311    3824 main.go:141] libmachine: (ha-373000-m02) DBG | Found match: ca:b5:c4:e6:47:79
	I0818 12:04:50.856314    3824 main.go:141] libmachine: (ha-373000-m02) DBG | IP: 192.169.0.6
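
The MAC-to-IP resolution above scans macOS's DHCP lease database for the guest's hardware address. A minimal Go sketch of that lookup, assuming the stock /var/db/dhcpd_leases layout in which each lease block carries ip_address= and hw_address= lines (the log prints the already-parsed entries, not the raw file):

	// Sketch: return the lease IP recorded for a given MAC address.
	// Assumes ip_address= precedes hw_address= inside each lease block,
	// as in the usual macOS dhcpd_leases format.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func leaseIP(leases, mac string) string {
		ip := ""
		for _, line := range strings.Split(leases, "\n") {
			line = strings.TrimSpace(line)
			if v, ok := strings.CutPrefix(line, "ip_address="); ok {
				ip = v
			}
			// hw_address lines look like "hw_address=1,ca:b5:c4:e6:47:79"
			if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac) {
				return ip
			}
		}
		return ""
	}

	func main() {
		data, err := os.ReadFile("/var/db/dhcpd_leases")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(leaseIP(string(data), "ca:b5:c4:e6:47:79"))
	}
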
	I0818 12:04:50.856368    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetConfigRaw
	I0818 12:04:50.857036    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetIP
	I0818 12:04:50.857215    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:04:50.857753    3824 machine.go:93] provisionDockerMachine start ...
	I0818 12:04:50.857763    3824 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:04:50.857876    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:04:50.857972    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:04:50.858077    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:04:50.858182    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:04:50.858287    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:04:50.858439    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:04:50.858605    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:04:50.858614    3824 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 12:04:50.862106    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 12:04:50.873418    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 12:04:50.874484    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:04:50.874508    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:04:50.874528    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:04:50.874540    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:04:51.253540    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:51 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 12:04:51.253561    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:51 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 12:04:51.368118    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:04:51.368138    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:04:51.368149    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:04:51.368159    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:04:51.369027    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:51 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 12:04:51.369038    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:51 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 12:04:56.941257    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:56 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0818 12:04:56.941321    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:56 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0818 12:04:56.941358    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:56 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0818 12:04:56.965032    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:56 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0818 12:05:01.918754    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 12:05:01.918770    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetMachineName
	I0818 12:05:01.918896    3824 buildroot.go:166] provisioning hostname "ha-373000-m02"
	I0818 12:05:01.918915    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetMachineName
	I0818 12:05:01.918996    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:01.919079    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:01.919189    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:01.919273    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:01.919370    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:01.919490    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:05:01.919633    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:05:01.919642    3824 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-373000-m02 && echo "ha-373000-m02" | sudo tee /etc/hostname
	I0818 12:05:01.981031    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-373000-m02
	
	I0818 12:05:01.981046    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:01.981170    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:01.981268    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:01.981355    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:01.981446    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:01.981583    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:05:01.981738    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:05:01.981752    3824 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-373000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-373000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-373000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 12:05:02.039473    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 12:05:02.039493    3824 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1007/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1007/.minikube}
	I0818 12:05:02.039504    3824 buildroot.go:174] setting up certificates
	I0818 12:05:02.039510    3824 provision.go:84] configureAuth start
	I0818 12:05:02.039517    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetMachineName
	I0818 12:05:02.039649    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetIP
	I0818 12:05:02.039751    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:02.039832    3824 provision.go:143] copyHostCerts
	I0818 12:05:02.039860    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:05:02.039907    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem, removing ...
	I0818 12:05:02.039913    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:05:02.040392    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem (1675 bytes)
	I0818 12:05:02.041069    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:05:02.041173    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem, removing ...
	I0818 12:05:02.041189    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:05:02.041355    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem (1082 bytes)
	I0818 12:05:02.041829    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:05:02.041870    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem, removing ...
	I0818 12:05:02.041876    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:05:02.041968    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem (1123 bytes)
	I0818 12:05:02.042135    3824 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem org=jenkins.ha-373000-m02 san=[127.0.0.1 192.169.0.6 ha-373000-m02 localhost minikube]
	I0818 12:05:02.193741    3824 provision.go:177] copyRemoteCerts
	I0818 12:05:02.193788    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 12:05:02.193804    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:02.193945    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:02.194042    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:02.194125    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:02.194199    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:05:02.226432    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 12:05:02.226499    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0818 12:05:02.246061    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 12:05:02.246122    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 12:05:02.265998    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 12:05:02.266073    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 12:05:02.285864    3824 provision.go:87] duration metric: took 246.348312ms to configureAuth
	I0818 12:05:02.285879    3824 buildroot.go:189] setting minikube options for container-runtime
	I0818 12:05:02.286050    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:05:02.286079    3824 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:05:02.286213    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:02.286301    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:02.286392    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:02.286472    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:02.286545    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:02.286668    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:05:02.286804    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:05:02.286812    3824 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0818 12:05:02.339893    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0818 12:05:02.339911    3824 buildroot.go:70] root file system type: tmpfs
	I0818 12:05:02.340004    3824 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0818 12:05:02.340042    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:02.340176    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:02.340315    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:02.340406    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:02.340501    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:02.340623    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:05:02.340773    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:05:02.340820    3824 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0818 12:05:02.404178    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0818 12:05:02.404194    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:02.404309    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:02.404408    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:02.404497    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:02.404595    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:02.404726    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:05:02.404863    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:05:02.404877    3824 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0818 12:05:04.075470    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0818 12:05:04.075484    3824 machine.go:96] duration metric: took 13.218134296s to provisionDockerMachine
	I0818 12:05:04.075493    3824 start.go:293] postStartSetup for "ha-373000-m02" (driver="hyperkit")
	I0818 12:05:04.075501    3824 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 12:05:04.075511    3824 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:05:04.075694    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 12:05:04.075707    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:04.075834    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:04.075939    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:04.076037    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:04.076115    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:05:04.108768    3824 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 12:05:04.113829    3824 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 12:05:04.113843    3824 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/addons for local assets ...
	I0818 12:05:04.113949    3824 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/files for local assets ...
	I0818 12:05:04.114103    3824 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> 15262.pem in /etc/ssl/certs
	I0818 12:05:04.114110    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /etc/ssl/certs/15262.pem
	I0818 12:05:04.114276    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 12:05:04.124928    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:05:04.155494    3824 start.go:296] duration metric: took 79.994023ms for postStartSetup
	I0818 12:05:04.155517    3824 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:05:04.155701    3824 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0818 12:05:04.155714    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:04.155817    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:04.155914    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:04.156017    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:04.156111    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:05:04.189027    3824 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0818 12:05:04.189092    3824 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0818 12:05:04.242339    3824 fix.go:56] duration metric: took 13.497284645s for fixHost
	I0818 12:05:04.242364    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:04.242535    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:04.242652    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:04.242756    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:04.242854    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:04.242979    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:05:04.243122    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:05:04.243130    3824 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 12:05:04.296405    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724007904.452858156
	
	I0818 12:05:04.296418    3824 fix.go:216] guest clock: 1724007904.452858156
	I0818 12:05:04.296424    3824 fix.go:229] Guest: 2024-08-18 12:05:04.452858156 -0700 PDT Remote: 2024-08-18 12:05:04.242354 -0700 PDT m=+32.296694535 (delta=210.504156ms)
	I0818 12:05:04.296434    3824 fix.go:200] guest clock delta is within tolerance: 210.504156ms
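
The clock check above runs `date +%s.%N` in the guest and diffs the result against host time: 1724007904.452858156 − 1724007904.242354 ≈ 210.504 ms, matching the logged delta. A minimal Go sketch of that computation (the function name clockDelta is illustrative, not minikube's):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses `date +%s.%N` output and returns guest-minus-host skew.
	// float64 cannot hold full nanosecond precision at epoch scale, which is
	// fine for a tolerance check measured in milliseconds.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}

	func main() {
		host := time.Unix(0, 1724007904242354000) // 12:05:04.242354 PDT as epoch ns
		d, _ := clockDelta("1724007904.452858156", host)
		fmt.Println(d) // ≈ 210.504ms
	}
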
	I0818 12:05:04.296438    3824 start.go:83] releasing machines lock for "ha-373000-m02", held for 13.551411847s
	I0818 12:05:04.296457    3824 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:05:04.296586    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetIP
	I0818 12:05:04.320113    3824 out.go:177] * Found network options:
	I0818 12:05:04.341094    3824 out.go:177]   - NO_PROXY=192.169.0.5
	W0818 12:05:04.362987    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 12:05:04.363034    3824 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:05:04.363842    3824 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:05:04.364116    3824 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:05:04.364240    3824 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 12:05:04.364290    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	W0818 12:05:04.364348    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 12:05:04.364447    3824 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0818 12:05:04.364491    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:04.364510    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:04.364707    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:04.364754    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:04.364945    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:04.364990    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:04.365178    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:05:04.365196    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:04.365310    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	W0818 12:05:04.393978    3824 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 12:05:04.394044    3824 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 12:05:04.444626    3824 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 12:05:04.444648    3824 start.go:495] detecting cgroup driver to use...
	I0818 12:05:04.444788    3824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:05:04.460942    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0818 12:05:04.470007    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0818 12:05:04.479404    3824 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0818 12:05:04.479474    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0818 12:05:04.488768    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:05:04.497773    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0818 12:05:04.506562    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:05:04.515469    3824 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 12:05:04.524688    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0818 12:05:04.533764    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0818 12:05:04.542630    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0818 12:05:04.551641    3824 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 12:05:04.559747    3824 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 12:05:04.568155    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:05:04.661227    3824 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0818 12:05:04.678789    3824 start.go:495] detecting cgroup driver to use...
	I0818 12:05:04.678856    3824 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0818 12:05:04.693121    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:05:04.704334    3824 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 12:05:04.718489    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:05:04.731628    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:05:04.741778    3824 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0818 12:05:04.765854    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:05:04.776545    3824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:05:04.792787    3824 ssh_runner.go:195] Run: which cri-dockerd
	I0818 12:05:04.795674    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0818 12:05:04.802688    3824 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0818 12:05:04.816018    3824 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0818 12:05:04.913547    3824 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0818 12:05:05.026765    3824 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0818 12:05:05.026795    3824 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0818 12:05:05.040598    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:05:05.134191    3824 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 12:05:07.482472    3824 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.348334544s)
	I0818 12:05:07.482540    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0818 12:05:07.493839    3824 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0818 12:05:07.506964    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:05:07.517252    3824 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0818 12:05:07.612993    3824 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0818 12:05:07.715979    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:05:07.829879    3824 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0818 12:05:07.843247    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:05:07.854199    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:05:07.948839    3824 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0818 12:05:08.015240    3824 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0818 12:05:08.015316    3824 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0818 12:05:08.020551    3824 start.go:563] Will wait 60s for crictl version
	I0818 12:05:08.020605    3824 ssh_runner.go:195] Run: which crictl
	I0818 12:05:08.024481    3824 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 12:05:08.049504    3824 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0818 12:05:08.049590    3824 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:05:08.068921    3824 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:05:08.108445    3824 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0818 12:05:08.150167    3824 out.go:177]   - env NO_PROXY=192.169.0.5
	I0818 12:05:08.171157    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetIP
	I0818 12:05:08.171639    3824 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0818 12:05:08.176186    3824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 12:05:08.185534    3824 mustload.go:65] Loading cluster: ha-373000
	I0818 12:05:08.185713    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:05:08.185923    3824 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:05:08.185945    3824 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:05:08.194524    3824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51818
	I0818 12:05:08.194866    3824 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:05:08.195227    3824 main.go:141] libmachine: Using API Version  1
	I0818 12:05:08.195244    3824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:05:08.195441    3824 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:05:08.195542    3824 main.go:141] libmachine: (ha-373000) Calling .GetState
	I0818 12:05:08.195619    3824 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:05:08.195696    3824 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid from json: 3836
	I0818 12:05:08.196597    3824 host.go:66] Checking if "ha-373000" exists ...
	I0818 12:05:08.196853    3824 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:05:08.196874    3824 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:05:08.205321    3824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51820
	I0818 12:05:08.205651    3824 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:05:08.205991    3824 main.go:141] libmachine: Using API Version  1
	I0818 12:05:08.206003    3824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:05:08.206254    3824 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:05:08.206377    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:05:08.206469    3824 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000 for IP: 192.169.0.6
	I0818 12:05:08.206476    3824 certs.go:194] generating shared ca certs ...
	I0818 12:05:08.206495    3824 certs.go:226] acquiring lock for ca certs: {Name:mkf9c4ce6e92bb713042213e4605fc93500c1ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:05:08.206643    3824 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key
	I0818 12:05:08.206701    3824 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key
	I0818 12:05:08.206711    3824 certs.go:256] generating profile certs ...
	I0818 12:05:08.206803    3824 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.key
	I0818 12:05:08.206887    3824 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.238ba961
	I0818 12:05:08.206947    3824 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key
	I0818 12:05:08.206955    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0818 12:05:08.206976    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0818 12:05:08.206995    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0818 12:05:08.207013    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0818 12:05:08.207030    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0818 12:05:08.207058    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0818 12:05:08.207082    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0818 12:05:08.207100    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0818 12:05:08.207176    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem (1338 bytes)
	W0818 12:05:08.207217    3824 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526_empty.pem, impossibly tiny 0 bytes
	I0818 12:05:08.207233    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem (1675 bytes)
	I0818 12:05:08.207270    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem (1082 bytes)
	I0818 12:05:08.207305    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem (1123 bytes)
	I0818 12:05:08.207341    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem (1675 bytes)
	I0818 12:05:08.207407    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:05:08.207441    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:05:08.207462    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem -> /usr/share/ca-certificates/1526.pem
	I0818 12:05:08.207480    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /usr/share/ca-certificates/15262.pem
	I0818 12:05:08.207506    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:05:08.207592    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:05:08.207678    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:05:08.207761    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:05:08.207840    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:05:08.236538    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0818 12:05:08.239929    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0818 12:05:08.248132    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0818 12:05:08.251185    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0818 12:05:08.259155    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0818 12:05:08.262371    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0818 12:05:08.270151    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0818 12:05:08.273887    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0818 12:05:08.282487    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0818 12:05:08.285536    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0818 12:05:08.293364    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0818 12:05:08.296397    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0818 12:05:08.304405    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 12:05:08.324774    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0818 12:05:08.344299    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 12:05:08.364160    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0818 12:05:08.384209    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0818 12:05:08.403922    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 12:05:08.423745    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 12:05:08.443381    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 12:05:08.463375    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 12:05:08.483664    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem --> /usr/share/ca-certificates/1526.pem (1338 bytes)
	I0818 12:05:08.503661    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /usr/share/ca-certificates/15262.pem (1708 bytes)
	I0818 12:05:08.523065    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0818 12:05:08.536313    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0818 12:05:08.550006    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0818 12:05:08.563497    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0818 12:05:08.577251    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0818 12:05:08.590803    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0818 12:05:08.604390    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0818 12:05:08.618111    3824 ssh_runner.go:195] Run: openssl version
	I0818 12:05:08.622218    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 12:05:08.630462    3824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:05:08.633848    3824 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:38 /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:05:08.633898    3824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:05:08.638082    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 12:05:08.646091    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1526.pem && ln -fs /usr/share/ca-certificates/1526.pem /etc/ssl/certs/1526.pem"
	I0818 12:05:08.654220    3824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1526.pem
	I0818 12:05:08.657554    3824 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:54 /usr/share/ca-certificates/1526.pem
	I0818 12:05:08.657600    3824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1526.pem
	I0818 12:05:08.661803    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1526.pem /etc/ssl/certs/51391683.0"
	I0818 12:05:08.669959    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15262.pem && ln -fs /usr/share/ca-certificates/15262.pem /etc/ssl/certs/15262.pem"
	I0818 12:05:08.678394    3824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15262.pem
	I0818 12:05:08.681807    3824 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:54 /usr/share/ca-certificates/15262.pem
	I0818 12:05:08.681847    3824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15262.pem
	I0818 12:05:08.685950    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15262.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 12:05:08.694130    3824 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 12:05:08.697586    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 12:05:08.701969    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 12:05:08.706279    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 12:05:08.710463    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 12:05:08.714641    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 12:05:08.718883    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
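
Each `openssl x509 -checkend 86400` run above exits nonzero when the certificate expires within the next 24 hours (86400 seconds). An equivalent check in Go, as an illustrative sketch rather than minikube's own code:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether a PEM certificate's NotAfter falls inside
	// the next d -- the Go analogue of `openssl x509 -checkend <seconds>`.
	func expiresWithin(pemBytes []byte, d time.Duration) (bool, error) {
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			return false, errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(expiresWithin(data, 24*time.Hour))
	}
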
	I0818 12:05:08.723008    3824 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0818 12:05:08.723074    3824 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-373000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 12:05:08.723091    3824 kube-vip.go:115] generating kube-vip config ...
	I0818 12:05:08.723120    3824 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0818 12:05:08.734860    3824 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0818 12:05:08.734897    3824 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0818 12:05:08.734943    3824 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 12:05:08.742519    3824 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 12:05:08.742560    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0818 12:05:08.749712    3824 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0818 12:05:08.763219    3824 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 12:05:08.776984    3824 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0818 12:05:08.790534    3824 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0818 12:05:08.793387    3824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 12:05:08.802777    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:05:08.900049    3824 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 12:05:08.914678    3824 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:05:08.914870    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:05:08.935865    3824 out.go:177] * Verifying Kubernetes components...
	I0818 12:05:08.977759    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:05:09.099141    3824 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 12:05:09.111487    3824 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:05:09.111691    3824 kapi.go:59] client config for ha-373000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xcb43f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0818 12:05:09.111727    3824 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0818 12:05:09.111887    3824 node_ready.go:35] waiting up to 6m0s for node "ha-373000-m02" to be "Ready" ...
	I0818 12:05:09.111971    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:09.111976    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:09.111984    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:09.111988    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.486764    3824 round_trippers.go:574] Response Status: 200 OK in 8375 milliseconds
	I0818 12:05:17.489585    3824 node_ready.go:49] node "ha-373000-m02" has status "Ready":"True"
	I0818 12:05:17.489601    3824 node_ready.go:38] duration metric: took 8.377957809s for node "ha-373000-m02" to be "Ready" ...
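
The readiness wait above is a raw GET of the Node object followed by a check of its Ready condition; the same check expressed through client-go, as a minimal sketch (clientset construction is elided, and this is an equivalent rather than minikube's exact code, which goes through the round-trippers logged here):

	// Sketch: report whether a node's NodeReady condition is True.
	package main

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func nodeReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
		node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {} // clientset wiring elided; see the kapi.go client config above
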
	I0818 12:05:17.489608    3824 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 12:05:17.489646    3824 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0818 12:05:17.489661    3824 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0818 12:05:17.489699    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0818 12:05:17.489704    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.489710    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.489715    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.530230    3824 round_trippers.go:574] Response Status: 200 OK in 40 milliseconds
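
That single unfiltered pod list seeds the per-pod waits that follow: each kube-system pod is matched against the system-critical label pairs named at 12:05:17.489608. One way to express that selection (an assumed helper; minikube's real filtering may differ):

	package example

	import (
		"context"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// systemCriticalPods lists kube-system once, then matches pods
	// in-process against the label pairs from the log.
	func systemCriticalPods(ctx context.Context, c kubernetes.Interface) ([]string, error) {
		selectors := []string{
			"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
			"component=kube-controller-manager", "k8s-app=kube-proxy",
			"component=kube-scheduler",
		}
		pods, err := c.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			return nil, err
		}
		var names []string
		for _, p := range pods.Items {
			for _, sel := range selectors {
				kv := strings.SplitN(sel, "=", 2)
				if p.Labels[kv[0]] == kv[1] {
					names = append(names, p.Name)
					break
				}
			}
		}
		return names, nil
	}
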
	I0818 12:05:17.537636    3824 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hv98f" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.537709    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-hv98f
	I0818 12:05:17.537723    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.537734    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.537739    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.557447    3824 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0818 12:05:17.557935    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:17.557944    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.557953    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.557959    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.560556    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:17.560923    3824 pod_ready.go:93] pod "coredns-6f6b679f8f-hv98f" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:17.560933    3824 pod_ready.go:82] duration metric: took 23.281295ms for pod "coredns-6f6b679f8f-hv98f" in "kube-system" namespace to be "Ready" ...
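
Each pod_ready probe is a pod GET followed by a node GET, and the pod passes only when its PodReady condition is True; the node GET exists because a pod hosted on a NotReady node is skipped rather than trusted (visible at pod_ready.go:98 further down). The condition test, sketched:

	package example

	import corev1 "k8s.io/api/core/v1"

	// isPodReady captures the per-pod check behind the pod_ready lines:
	// a pod counts as Ready only when its PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}
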
	I0818 12:05:17.560940    3824 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-rcfmc" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.560984    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-rcfmc
	I0818 12:05:17.560989    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.560995    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.560998    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.564580    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:17.565125    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:17.565134    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.565139    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.565163    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.569356    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:17.569742    3824 pod_ready.go:93] pod "coredns-6f6b679f8f-rcfmc" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:17.569751    3824 pod_ready.go:82] duration metric: took 8.807255ms for pod "coredns-6f6b679f8f-rcfmc" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.569758    3824 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.569797    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000
	I0818 12:05:17.569803    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.569809    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.569812    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.574840    3824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 12:05:17.575184    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:17.575192    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.575199    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.575202    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.578378    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:17.578782    3824 pod_ready.go:93] pod "etcd-ha-373000" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:17.578792    3824 pod_ready.go:82] duration metric: took 9.028915ms for pod "etcd-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.578799    3824 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.578838    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m02
	I0818 12:05:17.578843    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.578849    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.578854    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.580930    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:17.581338    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:17.581345    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.581351    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.581356    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.583546    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:17.584029    3824 pod_ready.go:93] pod "etcd-ha-373000-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:17.584039    3824 pod_ready.go:82] duration metric: took 5.23429ms for pod "etcd-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.584046    3824 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.584081    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:05:17.584087    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.584092    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.584102    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.586354    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:17.690238    3824 request.go:632] Waited for 103.365151ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:05:17.690287    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:05:17.690294    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.690299    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.690305    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.696245    3824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 12:05:17.696879    3824 pod_ready.go:93] pod "etcd-ha-373000-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:17.696890    3824 pod_ready.go:82] duration metric: took 112.842369ms for pod "etcd-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.696903    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.889742    3824 request.go:632] Waited for 192.805887ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000
	I0818 12:05:17.889790    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000
	I0818 12:05:17.889813    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.889819    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.889825    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.985037    3824 round_trippers.go:574] Response Status: 200 OK in 95 milliseconds
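
The request.go:632 lines are client-go's client-side token-bucket throttle, not API Priority and Fairness: with QPS and Burst left at 0 in the rest.Config dumped at 12:05:09.111, the library falls back to its defaults (roughly 5 requests/s with a burst of 10), so the bursts of node and pod GETs here queue for ~100-200ms each. Raising the limits is a config choice; an illustrative sketch, not a minikube setting:

	package example

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	// fastClient loosens the client-side throttle that produced the
	// "Waited for ... due to client-side throttling" lines above.
	// The values are illustrative assumptions.
	func fastClient(cfg *rest.Config) (*kubernetes.Clientset, error) {
		cfg.QPS = 50
		cfg.Burst = 100
		return kubernetes.NewForConfig(cfg)
	}
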
	I0818 12:05:18.089860    3824 request.go:632] Waited for 104.39101ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:18.089903    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:18.089927    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:18.089935    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:18.089944    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:18.093863    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:18.094247    3824 pod_ready.go:98] node "ha-373000" hosting pod "kube-apiserver-ha-373000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-373000" has status "Ready":"False"
	I0818 12:05:18.094258    3824 pod_ready.go:82] duration metric: took 397.361513ms for pod "kube-apiserver-ha-373000" in "kube-system" namespace to be "Ready" ...
	E0818 12:05:18.094264    3824 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-373000" hosting pod "kube-apiserver-ha-373000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-373000" has status "Ready":"False"
	I0818 12:05:18.094272    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:18.289789    3824 request.go:632] Waited for 195.476866ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:18.289877    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:18.289885    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:18.289892    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:18.289896    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:18.292952    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:18.489842    3824 request.go:632] Waited for 196.327806ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:18.489909    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:18.489917    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:18.489923    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:18.489927    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:18.494638    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:18.690780    3824 request.go:632] Waited for 96.165189ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:18.690864    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:18.690871    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:18.690878    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:18.690883    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:18.694201    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:18.890381    3824 request.go:632] Waited for 195.63212ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:18.890423    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:18.890429    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:18.890458    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:18.890462    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:18.893043    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:19.095616    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:19.095638    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:19.095645    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:19.095649    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:19.097986    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:19.290759    3824 request.go:632] Waited for 192.087215ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:19.290839    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:19.290847    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:19.290853    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:19.290860    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:19.293249    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:19.594823    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:19.594840    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:19.594847    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:19.594850    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:19.597610    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:19.690481    3824 request.go:632] Waited for 92.316894ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:19.690550    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:19.690558    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:19.690564    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:19.690568    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:19.694901    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:20.095867    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:20.095894    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:20.095905    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:20.095910    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:20.099922    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:20.100437    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:20.100445    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:20.100451    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:20.100455    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:20.102106    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:20.102474    3824 pod_ready.go:103] pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:20.595432    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:20.595453    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:20.595462    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:20.595466    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:20.597863    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:20.598227    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:20.598234    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:20.598240    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:20.598244    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:20.600061    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:21.094536    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:21.094563    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:21.094572    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:21.094577    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:21.097999    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:21.098519    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:21.098527    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:21.098533    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:21.098537    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:21.100015    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:21.595468    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:21.595500    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:21.595514    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:21.595523    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:21.601631    3824 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0818 12:05:21.601997    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:21.602004    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:21.602010    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:21.602017    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:21.605192    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:22.094552    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:22.094567    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:22.094574    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:22.094577    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:22.096991    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:22.097657    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:22.097665    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:22.097671    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:22.097675    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:22.099680    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:22.595859    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:22.595888    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:22.595900    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:22.595906    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:22.599261    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:22.599791    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:22.599802    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:22.599810    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:22.599816    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:22.602572    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:22.602966    3824 pod_ready.go:103] pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:23.096362    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:23.096389    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:23.096401    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:23.096407    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:23.100039    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:23.100588    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:23.100596    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:23.100601    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:23.100605    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:23.102265    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:23.595179    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:23.595208    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:23.595221    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:23.595229    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:23.598872    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:23.599421    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:23.599444    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:23.599450    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:23.599452    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:23.601013    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:24.095296    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:24.095327    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:24.095339    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:24.095344    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:24.099211    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:24.099655    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:24.099662    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:24.099668    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:24.099671    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:24.101457    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:24.595373    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:24.595395    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:24.595406    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:24.595412    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:24.599194    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:24.599738    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:24.599748    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:24.599754    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:24.599758    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:24.601701    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:25.094729    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:25.094756    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:25.094765    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:25.094770    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:25.098009    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:25.098599    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:25.098609    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:25.098617    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:25.098622    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:25.100470    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:25.100761    3824 pod_ready.go:103] pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:25.594953    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:25.594981    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:25.594993    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:25.595002    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:25.598801    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:25.599323    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:25.599331    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:25.599337    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:25.599340    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:25.601145    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:26.094462    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:26.094491    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:26.094502    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:26.094508    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:26.098279    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:26.098847    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:26.098857    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:26.098865    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:26.098869    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:26.100368    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:26.596309    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:26.596379    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:26.596394    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:26.596402    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:26.600128    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:26.600593    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:26.600601    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:26.600607    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:26.600613    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:26.602191    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:27.095574    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:27.095602    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:27.095613    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:27.095619    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:27.099557    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:27.100033    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:27.100043    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:27.100050    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:27.100075    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:27.101821    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:27.102055    3824 pod_ready.go:103] pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:27.594913    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:27.594967    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:27.594980    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:27.594986    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:27.598307    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:27.598905    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:27.598915    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:27.598923    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:27.598937    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:27.600697    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:28.095806    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:28.095836    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:28.095880    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:28.095892    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:28.099409    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:28.099885    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:28.099894    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:28.099904    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:28.099909    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:28.101420    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:28.594673    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:28.594699    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:28.594710    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:28.594716    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:28.598247    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:28.599059    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:28.599066    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:28.599071    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:28.599074    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:28.600807    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:29.095468    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:29.095495    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:29.095506    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:29.095515    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:29.099742    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:29.100208    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:29.100215    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:29.100221    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:29.100224    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:29.101920    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:29.102352    3824 pod_ready.go:103] pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:29.595041    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:29.595067    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:29.595079    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:29.595086    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:29.598712    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:29.599364    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:29.599372    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:29.599378    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:29.599384    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:29.601219    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:30.094218    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:30.094243    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:30.094255    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:30.094262    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:30.097685    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:30.098375    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:30.098384    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:30.098390    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:30.098393    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:30.099950    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:30.594415    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:30.594441    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:30.594453    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:30.594461    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:30.597799    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:30.598380    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:30.598391    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:30.598399    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:30.598407    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:30.600100    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:31.095000    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:31.095037    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:31.095081    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:31.095091    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:31.098989    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:31.099523    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:31.099535    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:31.099543    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:31.099565    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:31.101114    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:31.596112    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:31.596139    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:31.596151    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:31.596156    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:31.601060    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:31.601464    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:31.601473    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:31.601478    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:31.601482    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:31.608084    3824 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0818 12:05:31.608636    3824 pod_ready.go:103] pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:32.094503    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:32.094530    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:32.094541    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:32.094556    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:32.098239    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:32.099234    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:32.099247    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:32.099255    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:32.099260    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:32.101138    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:32.594723    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:32.594751    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:32.594795    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:32.594802    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:32.598658    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:32.599491    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:32.599499    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:32.599505    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:32.599508    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:32.601334    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:32.601711    3824 pod_ready.go:93] pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:32.601720    3824 pod_ready.go:82] duration metric: took 14.507895611s for pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:32.601726    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:32.601761    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m03
	I0818 12:05:32.601766    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:32.601772    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:32.601777    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:32.603708    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:32.604204    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:05:32.604212    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:32.604218    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:32.604222    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:32.606340    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:32.606652    3824 pod_ready.go:93] pod "kube-apiserver-ha-373000-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:32.606661    3824 pod_ready.go:82] duration metric: took 4.92937ms for pod "kube-apiserver-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:32.606674    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:32.606703    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:32.606708    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:32.606713    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:32.606717    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:32.609503    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:32.609918    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:32.609926    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:32.609931    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:32.609935    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:32.611839    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:33.108118    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:33.108139    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:33.108150    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:33.108155    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:33.111861    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:33.112554    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:33.112561    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:33.112567    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:33.112570    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:33.114401    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:33.608245    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:33.608285    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:33.608296    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:33.608313    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:33.611023    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:33.611446    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:33.611454    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:33.611460    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:33.611463    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:33.614112    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:34.106924    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:34.106945    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:34.106955    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:34.106961    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:34.110853    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:34.111241    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:34.111248    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:34.111254    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:34.111257    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:34.112969    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:34.606890    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:34.606910    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:34.606922    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:34.606934    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:34.610565    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:34.611180    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:34.611189    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:34.611194    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:34.611199    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:34.613556    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:34.613896    3824 pod_ready.go:103] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:35.108933    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:35.108955    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:35.108967    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:35.108975    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:35.113015    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:35.113665    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:35.113676    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:35.113684    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:35.113693    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:35.115446    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:35.607846    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:35.607862    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:35.607871    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:35.607875    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:35.610400    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:35.610817    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:35.610824    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:35.610830    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:35.610834    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:35.613002    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:36.107806    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:36.107834    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:36.107845    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:36.107850    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:36.111350    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:36.112008    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:36.112016    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:36.112022    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:36.112026    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:36.113688    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:36.607575    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:36.607590    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:36.607599    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:36.607605    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:36.610466    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:36.611075    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:36.611084    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:36.611092    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:36.611097    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:36.613213    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:37.107561    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:37.107587    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:37.107599    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:37.107607    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:37.111699    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:37.112198    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:37.112206    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:37.112212    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:37.112215    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:37.114106    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:37.114461    3824 pod_ready.go:103] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:37.606742    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:37.606757    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:37.606765    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:37.606769    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:37.609706    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:37.610101    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:37.610109    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:37.610115    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:37.610119    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:37.612095    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:38.108768    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:38.108787    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:38.108799    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:38.108807    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:38.112123    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:38.112659    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:38.112670    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:38.112677    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:38.112683    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:38.114718    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:38.606675    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:38.606689    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:38.606698    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:38.606703    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:38.609037    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:38.609536    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:38.609544    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:38.609549    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:38.609552    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:38.611709    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:39.107160    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:39.107184    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:39.107196    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:39.107203    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:39.110902    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:39.111438    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:39.111449    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:39.111457    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:39.111464    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:39.113475    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:39.606755    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:39.606770    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:39.606778    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:39.606782    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:39.609155    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:39.609534    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:39.609542    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:39.609548    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:39.609550    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:39.611533    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:39.611812    3824 pod_ready.go:103] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:40.107090    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:40.107116    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:40.107127    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:40.107135    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:40.110428    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:40.110932    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:40.110939    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:40.110945    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:40.110949    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:40.112726    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:40.607329    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:40.607344    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:40.607352    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:40.607358    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:40.609414    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:40.609793    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:40.609800    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:40.609806    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:40.609809    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:40.612006    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:41.108754    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:41.108777    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:41.108788    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:41.108794    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:41.112868    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:41.113578    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:41.113585    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:41.113591    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:41.113594    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:41.115666    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:41.607779    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:41.607794    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:41.607800    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:41.607803    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:41.626429    3824 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0818 12:05:41.626909    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:41.626917    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:41.626923    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:41.626928    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:41.638016    3824 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0818 12:05:41.638320    3824 pod_ready.go:103] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:42.107843    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:42.107861    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:42.107874    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:42.107877    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:42.125357    3824 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0818 12:05:42.125762    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:42.125770    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:42.125777    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:42.125794    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:42.137025    3824 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0818 12:05:42.606837    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:42.606853    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:42.606859    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:42.606863    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:42.631392    3824 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0818 12:05:42.632047    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:42.632055    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:42.632061    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:42.632064    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:42.644074    3824 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0818 12:05:43.106555    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:43.106567    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:43.106574    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:43.106577    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:43.108847    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:43.109231    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:43.109240    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:43.109246    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:43.109249    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:43.111648    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:43.607253    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:43.607270    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:43.607276    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:43.607281    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:43.609519    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:43.610124    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:43.610132    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:43.610138    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:43.610141    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:43.611865    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:44.106960    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:44.106982    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:44.106991    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:44.106996    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:44.110958    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:44.111626    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:44.111634    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:44.111640    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:44.111643    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:44.113355    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:44.113674    3824 pod_ready.go:103] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:44.606783    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:44.606795    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:44.606803    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:44.606806    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:44.609512    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:44.609978    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:44.609987    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:44.609993    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:44.609997    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:44.612208    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:45.108541    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:45.108568    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:45.108585    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:45.108627    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:45.112710    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:45.113170    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:45.113180    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:45.113188    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:45.113192    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:45.115093    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:45.607694    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:45.607709    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:45.607715    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:45.607718    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:45.609538    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:45.610190    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:45.610198    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:45.610204    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:45.610207    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:45.612007    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:46.107742    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:46.107761    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:46.107773    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:46.107781    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:46.111014    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:46.111681    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:46.111693    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:46.111701    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:46.111706    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:46.113564    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:46.113901    3824 pod_ready.go:103] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:46.607572    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:46.607584    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:46.607590    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:46.607594    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:46.609579    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:46.610284    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:46.610292    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:46.610297    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:46.610300    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:46.611985    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:47.107288    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:47.107311    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:47.107323    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:47.107328    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:47.110824    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:47.111541    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:47.111549    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:47.111554    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:47.111557    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:47.113249    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:47.606697    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:47.606709    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:47.606715    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:47.606718    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:47.608497    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:47.608927    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:47.608936    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:47.608941    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:47.608946    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:47.610440    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:48.106930    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:48.106956    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:48.106968    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:48.106974    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:48.110658    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:48.111153    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:48.111161    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:48.111167    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:48.111170    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:48.112733    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:48.606534    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:48.606547    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:48.606553    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:48.606556    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:48.608472    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:48.608894    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:48.608902    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:48.608908    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:48.608913    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:48.611651    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:48.611942    3824 pod_ready.go:103] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:49.107605    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:49.107632    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:49.107644    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:49.107650    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:49.111426    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:49.112028    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:49.112036    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:49.112041    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:49.112043    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:49.113955    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:49.607070    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:49.607085    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:49.607091    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:49.607095    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:49.608755    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:49.609118    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:49.609126    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:49.609132    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:49.609136    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:49.610469    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:50.108393    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:50.108414    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:50.108426    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:50.108432    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:50.111769    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:50.112262    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:50.112273    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:50.112280    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:50.112284    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:50.114291    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:50.606734    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:50.606749    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:50.606755    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:50.606758    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:50.608846    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:50.609305    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:50.609313    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:50.609318    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:50.609323    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:50.610972    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:51.107143    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:51.107164    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:51.107174    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:51.107180    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:51.110468    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:51.111149    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:51.111161    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:51.111182    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:51.111186    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:51.112895    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:51.113303    3824 pod_ready.go:103] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:51.607479    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:51.607491    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:51.607498    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:51.607502    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:51.609461    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:51.609979    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:51.609987    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:51.609993    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:51.609997    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:51.611838    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:52.106475    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:52.106495    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:52.106506    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:52.106512    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:52.110099    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:52.110714    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:52.110722    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:52.110728    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:52.110732    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:52.112418    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:52.606202    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:52.606215    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:52.606221    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:52.606224    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:52.608174    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:52.608702    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:52.608710    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:52.608716    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:52.608719    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:52.610185    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:53.106308    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:53.106366    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:53.106379    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:53.106387    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:53.109686    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:53.110263    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:53.110271    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:53.110277    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:53.110279    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:53.111992    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:53.606611    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:53.606626    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:53.606632    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:53.606637    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:53.608462    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:53.608915    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:53.608923    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:53.608928    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:53.608932    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:53.610639    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:53.611044    3824 pod_ready.go:103] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:54.108224    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:54.108251    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.108263    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.108270    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.112154    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:54.112694    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:54.112704    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.112715    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.112728    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.114303    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:54.114688    3824 pod_ready.go:93] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:54.114698    3824 pod_ready.go:82] duration metric: took 21.508688862s for pod "kube-controller-manager-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.114704    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.114734    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000-m02
	I0818 12:05:54.114740    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.114745    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.114749    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.116392    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:54.116762    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:54.116769    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.116775    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.116779    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.118208    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:54.118583    3824 pod_ready.go:93] pod "kube-controller-manager-ha-373000-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:54.118591    3824 pod_ready.go:82] duration metric: took 3.881464ms for pod "kube-controller-manager-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.118597    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.118626    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000-m03
	I0818 12:05:54.118631    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.118637    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.118639    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.120323    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:54.120754    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:05:54.120761    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.120767    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.120773    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.122312    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:54.122605    3824 pod_ready.go:93] pod "kube-controller-manager-ha-373000-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:54.122614    3824 pod_ready.go:82] duration metric: took 4.012121ms for pod "kube-controller-manager-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.122620    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2xkhp" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.122653    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2xkhp
	I0818 12:05:54.122658    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.122664    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.122668    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.124297    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:54.124644    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:54.124651    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.124657    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.124661    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.126346    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:54.126734    3824 pod_ready.go:93] pod "kube-proxy-2xkhp" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:54.126744    3824 pod_ready.go:82] duration metric: took 4.119352ms for pod "kube-proxy-2xkhp" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.126751    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5hg88" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.126784    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5hg88
	I0818 12:05:54.126789    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.126795    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.126798    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.128343    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:54.128709    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:54.128717    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.128722    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.128726    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.130213    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:54.130501    3824 pod_ready.go:93] pod "kube-proxy-5hg88" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:54.130510    3824 pod_ready.go:82] duration metric: took 3.754726ms for pod "kube-proxy-5hg88" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.130516    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bprqp" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.308685    3824 request.go:632] Waited for 178.119131ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bprqp
	I0818 12:05:54.308820    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bprqp
	I0818 12:05:54.308835    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.308860    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.308867    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.312453    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:54.508339    3824 request.go:632] Waited for 195.466477ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:05:54.508484    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:05:54.508495    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.508506    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.508513    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.512283    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:54.512758    3824 pod_ready.go:93] pod "kube-proxy-bprqp" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:54.512768    3824 pod_ready.go:82] duration metric: took 382.258295ms for pod "kube-proxy-bprqp" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.512781    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-l7zlx" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.709741    3824 request.go:632] Waited for 196.915457ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l7zlx
	I0818 12:05:54.709834    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l7zlx
	I0818 12:05:54.709843    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.709854    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.709864    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.713388    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:54.909468    3824 request.go:632] Waited for 195.387253ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m04
	I0818 12:05:54.909519    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m04
	I0818 12:05:54.909527    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.909538    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.909546    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.912861    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:54.913329    3824 pod_ready.go:93] pod "kube-proxy-l7zlx" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:54.913345    3824 pod_ready.go:82] duration metric: took 400.569828ms for pod "kube-proxy-l7zlx" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.913354    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:55.108201    3824 request.go:632] Waited for 194.795409ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000
	I0818 12:05:55.108295    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000
	I0818 12:05:55.108307    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:55.108318    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:55.108327    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:55.112015    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:55.308912    3824 request.go:632] Waited for 196.31979ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:55.308961    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:55.308969    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:55.308980    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:55.308988    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:55.312226    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:55.312828    3824 pod_ready.go:93] pod "kube-scheduler-ha-373000" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:55.312838    3824 pod_ready.go:82] duration metric: took 399.489444ms for pod "kube-scheduler-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:55.312844    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:55.509991    3824 request.go:632] Waited for 197.064513ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000-m02
	I0818 12:05:55.510043    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000-m02
	I0818 12:05:55.510054    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:55.510064    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:55.510071    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:55.512986    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:55.708355    3824 request.go:632] Waited for 194.791144ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:55.708418    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:55.708434    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:55.708472    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:55.708482    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:55.712929    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:55.713618    3824 pod_ready.go:93] pod "kube-scheduler-ha-373000-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:55.713628    3824 pod_ready.go:82] duration metric: took 400.791519ms for pod "kube-scheduler-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:55.713635    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:55.908894    3824 request.go:632] Waited for 195.195069ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000-m03
	I0818 12:05:55.908997    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000-m03
	I0818 12:05:55.909005    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:55.909017    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:55.909027    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:55.913053    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:56.108627    3824 request.go:632] Waited for 195.198114ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:05:56.108723    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:05:56.108739    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:56.108753    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:56.108764    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:56.112296    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:56.112725    3824 pod_ready.go:93] pod "kube-scheduler-ha-373000-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:56.112739    3824 pod_ready.go:82] duration metric: took 399.110792ms for pod "kube-scheduler-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:56.112748    3824 pod_ready.go:39] duration metric: took 38.624333262s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 12:05:56.112771    3824 api_server.go:52] waiting for apiserver process to appear ...
	I0818 12:05:56.112832    3824 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 12:05:56.125705    3824 api_server.go:72] duration metric: took 47.212470661s to wait for apiserver process to appear ...
	I0818 12:05:56.125716    3824 api_server.go:88] waiting for apiserver healthz status ...
	I0818 12:05:56.125733    3824 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0818 12:05:56.128805    3824 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0818 12:05:56.128837    3824 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0818 12:05:56.128843    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:56.128849    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:56.128853    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:56.129433    3824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0818 12:05:56.129522    3824 api_server.go:141] control plane version: v1.31.0
	I0818 12:05:56.129534    3824 api_server.go:131] duration metric: took 3.812968ms to wait for apiserver health ...
	I0818 12:05:56.129542    3824 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 12:05:56.308455    3824 request.go:632] Waited for 178.848504ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0818 12:05:56.308546    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0818 12:05:56.308556    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:56.308568    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:56.308578    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:56.314109    3824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 12:05:56.319517    3824 system_pods.go:59] 26 kube-system pods found
	I0818 12:05:56.319538    3824 system_pods.go:61] "coredns-6f6b679f8f-hv98f" [3ebd3bbf-bcb9-4afc-80f3-eaeb314da8eb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 12:05:56.319544    3824 system_pods.go:61] "coredns-6f6b679f8f-rcfmc" [904f6af6-c2de-4bc2-bc1d-f87e209ef4ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 12:05:56.319550    3824 system_pods.go:61] "etcd-ha-373000" [29cf5b88-f2b5-40d4-9f40-5aedcc75c4d1] Running
	I0818 12:05:56.319554    3824 system_pods.go:61] "etcd-ha-373000-m02" [6b2f793b-6d8c-4580-b555-2be21fb70246] Running
	I0818 12:05:56.319557    3824 system_pods.go:61] "etcd-ha-373000-m03" [377788a8-77e7-4e5e-a488-9791f8e26b32] Running
	I0818 12:05:56.319560    3824 system_pods.go:61] "kindnet-2gf5h" [ff15d17a-fb96-4721-847f-13f5c0e2613a] Running
	I0818 12:05:56.319562    3824 system_pods.go:61] "kindnet-k4c4p" [5219ec54-5c0a-471a-a99f-2823b4f944d9] Running
	I0818 12:05:56.319565    3824 system_pods.go:61] "kindnet-q7ghp" [933c4eb9-1d38-4575-a873-183f2fd31b25] Running
	I0818 12:05:56.319567    3824 system_pods.go:61] "kindnet-wxcx9" [11d95b76-db04-43de-9d3d-ce2147fbed21] Running
	I0818 12:05:56.319570    3824 system_pods.go:61] "kube-apiserver-ha-373000" [cc4f174d-0e3c-4aa8-b039-0bf57d6b0228] Running
	I0818 12:05:56.319574    3824 system_pods.go:61] "kube-apiserver-ha-373000-m02" [b72f02e8-46f2-46d4-8e73-03281cc424fc] Running
	I0818 12:05:56.319577    3824 system_pods.go:61] "kube-apiserver-ha-373000-m03" [5d3892b8-07e1-40a0-b4dc-4d9d9e8bc254] Running
	I0818 12:05:56.319580    3824 system_pods.go:61] "kube-controller-manager-ha-373000" [78b55cfd-8f33-4071-99b0-900fcb256ed1] Running
	I0818 12:05:56.319583    3824 system_pods.go:61] "kube-controller-manager-ha-373000-m02" [e261db19-d00f-41d1-aaf2-c1b77067c217] Running
	I0818 12:05:56.319586    3824 system_pods.go:61] "kube-controller-manager-ha-373000-m03" [fa71d7d9-904d-4df6-afe9-768758e32854] Running
	I0818 12:05:56.319589    3824 system_pods.go:61] "kube-proxy-2xkhp" [6a1cb8ad-e118-4db0-8852-b2d4973f536a] Running
	I0818 12:05:56.319592    3824 system_pods.go:61] "kube-proxy-5hg88" [4d00bf3e-8817-4220-91c5-7085fa453fa6] Running
	I0818 12:05:56.319595    3824 system_pods.go:61] "kube-proxy-bprqp" [4cf4fd8d-01fb-4e36-bede-6a44ab84b7bf] Running
	I0818 12:05:56.319597    3824 system_pods.go:61] "kube-proxy-l7zlx" [853afdf8-598a-435c-8c48-233287580493] Running
	I0818 12:05:56.319600    3824 system_pods.go:61] "kube-scheduler-ha-373000" [b1284a96-b76a-4909-9301-bfff7bbef8d6] Running
	I0818 12:05:56.319602    3824 system_pods.go:61] "kube-scheduler-ha-373000-m02" [fdfdc014-0b22-48aa-a1c6-5885bc67d472] Running
	I0818 12:05:56.319605    3824 system_pods.go:61] "kube-scheduler-ha-373000-m03" [89e378c4-274f-4aa1-8683-127061a7033e] Running
	I0818 12:05:56.319607    3824 system_pods.go:61] "kube-vip-ha-373000" [8df6a228-9693-4a21-af87-1bc9fc2af995] Running
	I0818 12:05:56.319610    3824 system_pods.go:61] "kube-vip-ha-373000-m02" [9cf001da-2ec7-4c04-bcb8-baa2bd4626f4] Running
	I0818 12:05:56.319612    3824 system_pods.go:61] "kube-vip-ha-373000-m03" [2e8a44eb-d965-4a90-ae08-464c7064ae17] Running
	I0818 12:05:56.319615    3824 system_pods.go:61] "storage-provisioner" [aa9c6f5d-6c1e-4901-83bb-62bc420ea044] Running
	I0818 12:05:56.319618    3824 system_pods.go:74] duration metric: took 190.077141ms to wait for pod list to return data ...
	I0818 12:05:56.319624    3824 default_sa.go:34] waiting for default service account to be created ...
	I0818 12:05:56.509526    3824 request.go:632] Waited for 189.85421ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0818 12:05:56.509622    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0818 12:05:56.509631    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:56.509641    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:56.509651    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:56.513692    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:56.513814    3824 default_sa.go:45] found service account: "default"
	I0818 12:05:56.513823    3824 default_sa.go:55] duration metric: took 194.201187ms for default service account to be created ...
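	The wait above polls GET /api/v1/namespaces/default/serviceaccounts until the "default" ServiceAccount exists. A minimal hand-run equivalent of that probe, as a sketch (assumes a kubeconfig pointed at this cluster):

	    kubectl -n default get serviceaccount default
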
	I0818 12:05:56.513831    3824 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 12:05:56.708948    3824 request.go:632] Waited for 195.078219ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0818 12:05:56.709031    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0818 12:05:56.709042    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:56.709053    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:56.709059    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:56.714162    3824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 12:05:56.719538    3824 system_pods.go:86] 26 kube-system pods found
	I0818 12:05:56.719553    3824 system_pods.go:89] "coredns-6f6b679f8f-hv98f" [3ebd3bbf-bcb9-4afc-80f3-eaeb314da8eb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 12:05:56.719567    3824 system_pods.go:89] "coredns-6f6b679f8f-rcfmc" [904f6af6-c2de-4bc2-bc1d-f87e209ef4ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 12:05:56.719573    3824 system_pods.go:89] "etcd-ha-373000" [29cf5b88-f2b5-40d4-9f40-5aedcc75c4d1] Running
	I0818 12:05:56.719577    3824 system_pods.go:89] "etcd-ha-373000-m02" [6b2f793b-6d8c-4580-b555-2be21fb70246] Running
	I0818 12:05:56.719580    3824 system_pods.go:89] "etcd-ha-373000-m03" [377788a8-77e7-4e5e-a488-9791f8e26b32] Running
	I0818 12:05:56.719584    3824 system_pods.go:89] "kindnet-2gf5h" [ff15d17a-fb96-4721-847f-13f5c0e2613a] Running
	I0818 12:05:56.719587    3824 system_pods.go:89] "kindnet-k4c4p" [5219ec54-5c0a-471a-a99f-2823b4f944d9] Running
	I0818 12:05:56.719589    3824 system_pods.go:89] "kindnet-q7ghp" [933c4eb9-1d38-4575-a873-183f2fd31b25] Running
	I0818 12:05:56.719593    3824 system_pods.go:89] "kindnet-wxcx9" [11d95b76-db04-43de-9d3d-ce2147fbed21] Running
	I0818 12:05:56.719596    3824 system_pods.go:89] "kube-apiserver-ha-373000" [cc4f174d-0e3c-4aa8-b039-0bf57d6b0228] Running
	I0818 12:05:56.719598    3824 system_pods.go:89] "kube-apiserver-ha-373000-m02" [b72f02e8-46f2-46d4-8e73-03281cc424fc] Running
	I0818 12:05:56.719602    3824 system_pods.go:89] "kube-apiserver-ha-373000-m03" [5d3892b8-07e1-40a0-b4dc-4d9d9e8bc254] Running
	I0818 12:05:56.719605    3824 system_pods.go:89] "kube-controller-manager-ha-373000" [78b55cfd-8f33-4071-99b0-900fcb256ed1] Running
	I0818 12:05:56.719608    3824 system_pods.go:89] "kube-controller-manager-ha-373000-m02" [e261db19-d00f-41d1-aaf2-c1b77067c217] Running
	I0818 12:05:56.719612    3824 system_pods.go:89] "kube-controller-manager-ha-373000-m03" [fa71d7d9-904d-4df6-afe9-768758e32854] Running
	I0818 12:05:56.719614    3824 system_pods.go:89] "kube-proxy-2xkhp" [6a1cb8ad-e118-4db0-8852-b2d4973f536a] Running
	I0818 12:05:56.719617    3824 system_pods.go:89] "kube-proxy-5hg88" [4d00bf3e-8817-4220-91c5-7085fa453fa6] Running
	I0818 12:05:56.719620    3824 system_pods.go:89] "kube-proxy-bprqp" [4cf4fd8d-01fb-4e36-bede-6a44ab84b7bf] Running
	I0818 12:05:56.719622    3824 system_pods.go:89] "kube-proxy-l7zlx" [853afdf8-598a-435c-8c48-233287580493] Running
	I0818 12:05:56.719625    3824 system_pods.go:89] "kube-scheduler-ha-373000" [b1284a96-b76a-4909-9301-bfff7bbef8d6] Running
	I0818 12:05:56.719627    3824 system_pods.go:89] "kube-scheduler-ha-373000-m02" [fdfdc014-0b22-48aa-a1c6-5885bc67d472] Running
	I0818 12:05:56.719630    3824 system_pods.go:89] "kube-scheduler-ha-373000-m03" [89e378c4-274f-4aa1-8683-127061a7033e] Running
	I0818 12:05:56.719633    3824 system_pods.go:89] "kube-vip-ha-373000" [8df6a228-9693-4a21-af87-1bc9fc2af995] Running
	I0818 12:05:56.719636    3824 system_pods.go:89] "kube-vip-ha-373000-m02" [9cf001da-2ec7-4c04-bcb8-baa2bd4626f4] Running
	I0818 12:05:56.719638    3824 system_pods.go:89] "kube-vip-ha-373000-m03" [2e8a44eb-d965-4a90-ae08-464c7064ae17] Running
	I0818 12:05:56.719641    3824 system_pods.go:89] "storage-provisioner" [aa9c6f5d-6c1e-4901-83bb-62bc420ea044] Running
	I0818 12:05:56.719645    3824 system_pods.go:126] duration metric: took 205.816796ms to wait for k8s-apps to be running ...
	I0818 12:05:56.719654    3824 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 12:05:56.719707    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 12:05:56.730176    3824 system_svc.go:56] duration metric: took 10.521627ms WaitForService to wait for kubelet
	I0818 12:05:56.730190    3824 kubeadm.go:582] duration metric: took 47.816976086s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:05:56.730206    3824 node_conditions.go:102] verifying NodePressure condition ...
	I0818 12:05:56.908283    3824 request.go:632] Waited for 178.034149ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0818 12:05:56.908349    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0818 12:05:56.908360    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:56.908372    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:56.908382    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:56.912474    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:56.913347    3824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 12:05:56.913361    3824 node_conditions.go:123] node cpu capacity is 2
	I0818 12:05:56.913370    3824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 12:05:56.913375    3824 node_conditions.go:123] node cpu capacity is 2
	I0818 12:05:56.913378    3824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 12:05:56.913381    3824 node_conditions.go:123] node cpu capacity is 2
	I0818 12:05:56.913384    3824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 12:05:56.913387    3824 node_conditions.go:123] node cpu capacity is 2
	I0818 12:05:56.913390    3824 node_conditions.go:105] duration metric: took 183.185572ms to run NodePressure ...
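	The NodePressure step logs .status.capacity figures for every node; the four nodes above each report 2 CPUs and 17734596Ki of ephemeral storage. A sketch of the same readout with kubectl (kubeconfig for this cluster assumed):

	    kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu
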
	I0818 12:05:56.913403    3824 start.go:241] waiting for startup goroutines ...
	I0818 12:05:56.913420    3824 start.go:255] writing updated cluster config ...
	I0818 12:05:56.936907    3824 out.go:201] 
	I0818 12:05:56.957765    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:05:56.957829    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:05:56.978649    3824 out.go:177] * Starting "ha-373000-m03" control-plane node in "ha-373000" cluster
	I0818 12:05:57.020705    3824 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:05:57.020729    3824 cache.go:56] Caching tarball of preloaded images
	I0818 12:05:57.020850    3824 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0818 12:05:57.020861    3824 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:05:57.020943    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:05:57.021483    3824 start.go:360] acquireMachinesLock for ha-373000-m03: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:05:57.021533    3824 start.go:364] duration metric: took 37.26µs to acquireMachinesLock for "ha-373000-m03"
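	acquireMachinesLock serializes machine create/fix operations behind a lock with a 500ms retry delay and a 13m timeout (the Delay/Timeout fields above); uncontended here, it was acquired in microseconds. The same serialize-on-a-lock-file pattern, sketched with flock(1) purely for illustration (minikube uses its own Go lock, and the lock path below is a placeholder):

	    exec 9>/tmp/ha-373000-machines.lock
	    flock --wait 780 9 || { echo "timed out waiting for machine lock" >&2; exit 1; }   # 780s = 13m
	    # ...mutate the machine while holding fd 9...
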
	I0818 12:05:57.021546    3824 start.go:96] Skipping create... Using existing machine configuration
	I0818 12:05:57.021559    3824 fix.go:54] fixHost starting: m03
	I0818 12:05:57.021778    3824 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:05:57.021797    3824 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:05:57.030756    3824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51825
	I0818 12:05:57.031090    3824 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:05:57.031467    3824 main.go:141] libmachine: Using API Version  1
	I0818 12:05:57.031484    3824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:05:57.031692    3824 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:05:57.031804    3824 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	I0818 12:05:57.031899    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetState
	I0818 12:05:57.031976    3824 main.go:141] libmachine: (ha-373000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:05:57.032050    3824 main.go:141] libmachine: (ha-373000-m03) DBG | hyperkit pid from json: 3309
	I0818 12:05:57.032942    3824 main.go:141] libmachine: (ha-373000-m03) DBG | hyperkit pid 3309 missing from process table
	I0818 12:05:57.032990    3824 fix.go:112] recreateIfNeeded on ha-373000-m03: state=Stopped err=<nil>
	I0818 12:05:57.033010    3824 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	W0818 12:05:57.033095    3824 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:05:57.053856    3824 out.go:177] * Restarting existing hyperkit VM for "ha-373000-m03" ...
	I0818 12:05:57.111714    3824 main.go:141] libmachine: (ha-373000-m03) Calling .Start
	I0818 12:05:57.112061    3824 main.go:141] libmachine: (ha-373000-m03) minikube might have been shut down uncleanly; the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/hyperkit.pid
	I0818 12:05:57.112084    3824 main.go:141] libmachine: (ha-373000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:05:57.113448    3824 main.go:141] libmachine: (ha-373000-m03) DBG | hyperkit pid 3309 missing from process table
	I0818 12:05:57.113464    3824 main.go:141] libmachine: (ha-373000-m03) DBG | pid 3309 is in state "Stopped"
	I0818 12:05:57.113496    3824 main.go:141] libmachine: (ha-373000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/hyperkit.pid...
	I0818 12:05:57.113651    3824 main.go:141] libmachine: (ha-373000-m03) DBG | Using UUID 94c31089-d24d-4aaf-9127-b4e2c0237480
	I0818 12:05:57.139957    3824 main.go:141] libmachine: (ha-373000-m03) DBG | Generated MAC 72:9e:9b:7f:e6:a8
	I0818 12:05:57.139982    3824 main.go:141] libmachine: (ha-373000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000
	I0818 12:05:57.140122    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"94c31089-d24d-4aaf-9127-b4e2c0237480", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b2660)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:05:57.140163    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"94c31089-d24d-4aaf-9127-b4e2c0237480", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b2660)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:05:57.140207    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "94c31089-d24d-4aaf-9127-b4e2c0237480", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/ha-373000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"}
	I0818 12:05:57.140253    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 94c31089-d24d-4aaf-9127-b4e2c0237480 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/ha-373000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"
	I0818 12:05:57.140273    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 12:05:57.141664    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 DEBUG: hyperkit: Pid is 3862
	I0818 12:05:57.142065    3824 main.go:141] libmachine: (ha-373000-m03) DBG | Attempt 0
	I0818 12:05:57.142080    3824 main.go:141] libmachine: (ha-373000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:05:57.142152    3824 main.go:141] libmachine: (ha-373000-m03) DBG | hyperkit pid from json: 3862
	I0818 12:05:57.143976    3824 main.go:141] libmachine: (ha-373000-m03) DBG | Searching for 72:9e:9b:7f:e6:a8 in /var/db/dhcpd_leases ...
	I0818 12:05:57.144038    3824 main.go:141] libmachine: (ha-373000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0818 12:05:57.144051    3824 main.go:141] libmachine: (ha-373000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c3975b}
	I0818 12:05:57.144071    3824 main.go:141] libmachine: (ha-373000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39749}
	I0818 12:05:57.144076    3824 main.go:141] libmachine: (ha-373000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c245a6}
	I0818 12:05:57.144085    3824 main.go:141] libmachine: (ha-373000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c39672}
	I0818 12:05:57.144096    3824 main.go:141] libmachine: (ha-373000-m03) DBG | Found match: 72:9e:9b:7f:e6:a8
	I0818 12:05:57.144104    3824 main.go:141] libmachine: (ha-373000-m03) DBG | IP: 192.169.0.7
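	hyperkit assigns no static IP, so the driver recovers the VM's address by matching the generated MAC against the host's DHCP leases, as logged above. A stand-alone sketch of that lookup, assuming the stock macOS bootpd lease layout (one {...} block per lease with an ip_address= line before the hw_address= line; the field names come from bootpd, not from minikube's code):

	    mac="72:9e:9b:7f:e6:a8"
	    awk -v mac="$mac" -F= '
	      $1 ~ /ip_address/             { ip = $2 }
	      $1 ~ /hw_address/ && $2 ~ mac { print ip; exit }
	    ' /var/db/dhcpd_leases
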
	I0818 12:05:57.144124    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetConfigRaw
	I0818 12:05:57.144820    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetIP
	I0818 12:05:57.145002    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:05:57.145622    3824 machine.go:93] provisionDockerMachine start ...
	I0818 12:05:57.145633    3824 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	I0818 12:05:57.145753    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:05:57.145862    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:05:57.145984    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:05:57.146107    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:05:57.146206    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:05:57.146322    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:05:57.146485    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0818 12:05:57.146492    3824 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 12:05:57.149281    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 12:05:57.157498    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 12:05:57.158547    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:05:57.158570    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:05:57.158621    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:05:57.158637    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:05:57.538516    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 12:05:57.538532    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 12:05:57.653356    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:05:57.653382    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:05:57.653391    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:05:57.653407    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:05:57.654209    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 12:05:57.654219    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 12:06:03.320567    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:06:03 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0818 12:06:03.320633    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:06:03 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0818 12:06:03.320642    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:06:03 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0818 12:06:03.344230    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:06:03 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0818 12:06:32.211281    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 12:06:32.211301    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetMachineName
	I0818 12:06:32.211449    3824 buildroot.go:166] provisioning hostname "ha-373000-m03"
	I0818 12:06:32.211462    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetMachineName
	I0818 12:06:32.211557    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:32.211637    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:32.211710    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.211795    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.211870    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:32.212039    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:06:32.212206    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0818 12:06:32.212216    3824 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-373000-m03 && echo "ha-373000-m03" | sudo tee /etc/hostname
	I0818 12:06:32.283934    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-373000-m03
	
	I0818 12:06:32.283950    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:32.284081    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:32.284166    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.284244    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.284338    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:32.284470    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:06:32.284619    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0818 12:06:32.284630    3824 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-373000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-373000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-373000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 12:06:32.349979    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 12:06:32.349995    3824 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1007/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1007/.minikube}
	I0818 12:06:32.350007    3824 buildroot.go:174] setting up certificates
	I0818 12:06:32.350014    3824 provision.go:84] configureAuth start
	I0818 12:06:32.350021    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetMachineName
	I0818 12:06:32.350153    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetIP
	I0818 12:06:32.350260    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:32.350351    3824 provision.go:143] copyHostCerts
	I0818 12:06:32.350379    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:06:32.350451    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem, removing ...
	I0818 12:06:32.350457    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:06:32.350602    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem (1082 bytes)
	I0818 12:06:32.350813    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:06:32.350855    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem, removing ...
	I0818 12:06:32.350861    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:06:32.350938    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem (1123 bytes)
	I0818 12:06:32.351094    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:06:32.351132    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem, removing ...
	I0818 12:06:32.351137    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:06:32.351223    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem (1675 bytes)
	I0818 12:06:32.351372    3824 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem org=jenkins.ha-373000-m03 san=[127.0.0.1 192.169.0.7 ha-373000-m03 localhost minikube]
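	The server cert is issued from the shared minikube CA with SANs covering the node's IP, hostname, localhost and the minikube name, so any of those endpoints verifies. A roughly equivalent issuance with openssl, as a sketch (the file names are placeholders, not minikube's on-disk layout):

	    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	      -subj "/O=jenkins.ha-373000-m03" -out server.csr
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	      -days 365 -out server.pem \
	      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.169.0.7,DNS:ha-373000-m03,DNS:localhost,DNS:minikube')
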
	I0818 12:06:32.510769    3824 provision.go:177] copyRemoteCerts
	I0818 12:06:32.510826    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 12:06:32.510842    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:32.510985    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:32.511073    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.511136    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:32.511201    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/id_rsa Username:docker}
	I0818 12:06:32.548268    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 12:06:32.548346    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 12:06:32.568706    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 12:06:32.568782    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0818 12:06:32.588790    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 12:06:32.588863    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 12:06:32.608953    3824 provision.go:87] duration metric: took 258.934195ms to configureAuth
	I0818 12:06:32.608976    3824 buildroot.go:189] setting minikube options for container-runtime
	I0818 12:06:32.609164    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:06:32.609181    3824 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	I0818 12:06:32.609317    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:32.609407    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:32.609488    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.609563    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.609655    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:32.609780    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:06:32.609954    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0818 12:06:32.609962    3824 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0818 12:06:32.671099    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0818 12:06:32.671110    3824 buildroot.go:70] root file system type: tmpfs
	I0818 12:06:32.671182    3824 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0818 12:06:32.671194    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:32.671327    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:32.671421    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.671505    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.671597    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:32.671725    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:06:32.671862    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0818 12:06:32.671916    3824 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0818 12:06:32.743226    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0818 12:06:32.743243    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:32.743369    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:32.743463    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.743553    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.743628    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:32.743742    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:06:32.743890    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0818 12:06:32.743902    3824 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0818 12:06:34.364405    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0818 12:06:34.364421    3824 machine.go:96] duration metric: took 37.219949388s to provisionDockerMachine
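	The unit install above is a compare-then-swap: render docker.service.new, diff it against the installed unit, and only on a difference move it into place, daemon-reload, enable, and restart; the "can't stat" diff output is the expected first-boot case where no unit exists yet. Two quick ways to inspect what systemd ended up with, as a sketch (run inside the guest, not part of the test itself):

	    systemctl cat docker.service               # merged unit file plus drop-ins
	    systemctl show -p ExecStart docker.service
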
	I0818 12:06:34.364429    3824 start.go:293] postStartSetup for "ha-373000-m03" (driver="hyperkit")
	I0818 12:06:34.364441    3824 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 12:06:34.364454    3824 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	I0818 12:06:34.364637    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 12:06:34.364649    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:34.364748    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:34.364846    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:34.364924    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:34.364998    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/id_rsa Username:docker}
	I0818 12:06:34.403257    3824 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 12:06:34.406448    3824 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 12:06:34.406462    3824 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/addons for local assets ...
	I0818 12:06:34.406565    3824 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/files for local assets ...
	I0818 12:06:34.406753    3824 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> 15262.pem in /etc/ssl/certs
	I0818 12:06:34.406760    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /etc/ssl/certs/15262.pem
	I0818 12:06:34.406965    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 12:06:34.415199    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:06:34.434664    3824 start.go:296] duration metric: took 70.221347ms for postStartSetup
	I0818 12:06:34.434685    3824 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	I0818 12:06:34.434881    3824 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0818 12:06:34.434895    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:34.434985    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:34.435078    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:34.435180    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:34.435266    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/id_rsa Username:docker}
	I0818 12:06:34.472820    3824 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0818 12:06:34.472878    3824 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0818 12:06:34.507076    3824 fix.go:56] duration metric: took 37.486680553s for fixHost
	I0818 12:06:34.507105    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:34.507242    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:34.507350    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:34.507450    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:34.507537    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:34.507661    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:06:34.507812    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0818 12:06:34.507820    3824 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 12:06:34.567906    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724007994.725838648
	
	I0818 12:06:34.567925    3824 fix.go:216] guest clock: 1724007994.725838648
	I0818 12:06:34.567930    3824 fix.go:229] Guest: 2024-08-18 12:06:34.725838648 -0700 PDT Remote: 2024-08-18 12:06:34.507094 -0700 PDT m=+122.564244892 (delta=218.744648ms)
	I0818 12:06:34.567943    3824 fix.go:200] guest clock delta is within tolerance: 218.744648ms
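	The clock check runs date +%s.%N in the guest and subtracts the host's wall clock at roughly the same instant; the 218ms delta passes because it sits under minikube's skew tolerance. A coarse whole-second version of the probe, as a sketch (guest address and user taken from this log; BSD date on the macOS host has no %N, hence seconds only):

	    guest=$(ssh docker@192.169.0.7 date +%s)
	    host=$(date +%s)
	    echo "guest-host skew: $((guest - host))s"
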
	I0818 12:06:34.567946    3824 start.go:83] releasing machines lock for "ha-373000-m03", held for 37.547576549s
	I0818 12:06:34.567963    3824 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	I0818 12:06:34.568094    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetIP
	I0818 12:06:34.591371    3824 out.go:177] * Found network options:
	I0818 12:06:34.612327    3824 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0818 12:06:34.633268    3824 proxy.go:119] failed to check proxy env: Error ip not in block
	W0818 12:06:34.633293    3824 proxy.go:119] failed to check proxy env: Error ip not in block
	I0818 12:06:34.633308    3824 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	I0818 12:06:34.633777    3824 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	I0818 12:06:34.633931    3824 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	I0818 12:06:34.634012    3824 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 12:06:34.634042    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	W0818 12:06:34.634075    3824 proxy.go:119] failed to check proxy env: Error ip not in block
	W0818 12:06:34.634099    3824 proxy.go:119] failed to check proxy env: Error ip not in block
	I0818 12:06:34.634164    3824 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0818 12:06:34.634177    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:34.634183    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:34.634314    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:34.634342    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:34.634432    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:34.634462    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:34.634570    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/id_rsa Username:docker}
	I0818 12:06:34.634589    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:34.634716    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/id_rsa Username:docker}
	W0818 12:06:34.668553    3824 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 12:06:34.668615    3824 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 12:06:34.719514    3824 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 12:06:34.719537    3824 start.go:495] detecting cgroup driver to use...
	I0818 12:06:34.719641    3824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:06:34.736086    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0818 12:06:34.744327    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0818 12:06:34.752345    3824 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0818 12:06:34.752395    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0818 12:06:34.760474    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:06:34.768546    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0818 12:06:34.776560    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:06:34.784665    3824 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 12:06:34.792933    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0818 12:06:34.801000    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0818 12:06:34.809207    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0818 12:06:34.817499    3824 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 12:06:34.824699    3824 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 12:06:34.832081    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:06:34.922497    3824 ssh_runner.go:195] Run: sudo systemctl restart containerd
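	The sed batch above rewrites /etc/containerd/config.toml for the cgroupfs driver (SystemdCgroup = false), the runc v2 shim, the registry.k8s.io/pause:3.10 sandbox image and /etc/cni/net.d, then reloads units and restarts containerd. A one-line spot check of the key setting, as a sketch (run inside the guest):

	    grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false
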
	I0818 12:06:34.942245    3824 start.go:495] detecting cgroup driver to use...
	I0818 12:06:34.942318    3824 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0818 12:06:34.961594    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:06:34.977959    3824 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 12:06:34.994785    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:06:35.006539    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:06:35.017278    3824 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0818 12:06:35.039389    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:06:35.050815    3824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:06:35.065658    3824 ssh_runner.go:195] Run: which cri-dockerd
	I0818 12:06:35.068495    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0818 12:06:35.078248    3824 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0818 12:06:35.092006    3824 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0818 12:06:35.191577    3824 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0818 12:06:35.301568    3824 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0818 12:06:35.301599    3824 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0818 12:06:35.317876    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:06:35.413915    3824 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 12:06:37.731416    3824 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.317550809s)
	I0818 12:06:37.731481    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0818 12:06:37.741565    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:06:37.751381    3824 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0818 12:06:37.845484    3824 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0818 12:06:37.959362    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:06:38.068888    3824 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0818 12:06:38.082534    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:06:38.093177    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:06:38.188351    3824 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0818 12:06:38.252978    3824 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0818 12:06:38.253055    3824 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0818 12:06:38.257331    3824 start.go:563] Will wait 60s for crictl version
	I0818 12:06:38.257383    3824 ssh_runner.go:195] Run: which crictl
	I0818 12:06:38.260636    3824 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 12:06:38.285125    3824 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
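	crictl reports Docker 27.1.2 behind CRI API v1, reached through the cri-dockerd shim configured in /etc/crictl.yaml above. The same query with the socket spelled out explicitly, as a sketch (run inside the guest):

	    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
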
	I0818 12:06:38.285203    3824 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:06:38.303582    3824 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:06:38.341530    3824 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0818 12:06:38.415385    3824 out.go:177]   - env NO_PROXY=192.169.0.5
	I0818 12:06:38.457289    3824 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0818 12:06:38.478242    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetIP
	I0818 12:06:38.478613    3824 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0818 12:06:38.483129    3824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
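	That one-liner rewrites /etc/hosts through a temp file: strip any stale host.minikube.internal entry, append a fresh one pointing at the host-side gateway 192.169.0.1, then copy the result back with sudo. A quick check of the outcome, as a sketch (run inside the guest):

	    grep 'host.minikube.internal' /etc/hosts
	    getent hosts host.minikube.internal
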
	I0818 12:06:38.492823    3824 mustload.go:65] Loading cluster: ha-373000
	I0818 12:06:38.493001    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:06:38.493248    3824 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:06:38.493270    3824 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:06:38.502531    3824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51847
	I0818 12:06:38.502982    3824 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:06:38.503380    3824 main.go:141] libmachine: Using API Version  1
	I0818 12:06:38.503398    3824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:06:38.503603    3824 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:06:38.503720    3824 main.go:141] libmachine: (ha-373000) Calling .GetState
	I0818 12:06:38.503806    3824 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:06:38.503908    3824 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid from json: 3836
	I0818 12:06:38.504863    3824 host.go:66] Checking if "ha-373000" exists ...
	I0818 12:06:38.505136    3824 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:06:38.505159    3824 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:06:38.514076    3824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51849
	I0818 12:06:38.514417    3824 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:06:38.514734    3824 main.go:141] libmachine: Using API Version  1
	I0818 12:06:38.514748    3824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:06:38.514977    3824 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:06:38.515088    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:06:38.515180    3824 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000 for IP: 192.169.0.7
	I0818 12:06:38.515186    3824 certs.go:194] generating shared ca certs ...
	I0818 12:06:38.515198    3824 certs.go:226] acquiring lock for ca certs: {Name:mkf9c4ce6e92bb713042213e4605fc93500c1ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:06:38.515378    3824 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key
	I0818 12:06:38.515454    3824 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key
	I0818 12:06:38.515480    3824 certs.go:256] generating profile certs ...
	I0818 12:06:38.515601    3824 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.key
	I0818 12:06:38.515691    3824 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.a796c580
	I0818 12:06:38.515764    3824 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key
	I0818 12:06:38.515772    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0818 12:06:38.515792    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0818 12:06:38.515811    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0818 12:06:38.515836    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0818 12:06:38.515854    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0818 12:06:38.515881    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0818 12:06:38.515909    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0818 12:06:38.515932    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0818 12:06:38.516021    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem (1338 bytes)
	W0818 12:06:38.516070    3824 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526_empty.pem, impossibly tiny 0 bytes
	I0818 12:06:38.516079    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem (1675 bytes)
	I0818 12:06:38.516113    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem (1082 bytes)
	I0818 12:06:38.516146    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem (1123 bytes)
	I0818 12:06:38.516176    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem (1675 bytes)
	I0818 12:06:38.516242    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:06:38.516275    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /usr/share/ca-certificates/15262.pem
	I0818 12:06:38.516297    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:06:38.516315    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem -> /usr/share/ca-certificates/1526.pem
	I0818 12:06:38.516339    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:06:38.516428    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:06:38.516506    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:06:38.516591    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:06:38.516676    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:06:38.545460    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0818 12:06:38.549008    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0818 12:06:38.556894    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0818 12:06:38.559945    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0818 12:06:38.573932    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0818 12:06:38.577300    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0818 12:06:38.585295    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0818 12:06:38.588495    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0818 12:06:38.596413    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0818 12:06:38.600019    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0818 12:06:38.608205    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0818 12:06:38.612275    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0818 12:06:38.620061    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 12:06:38.640273    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0818 12:06:38.660114    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 12:06:38.679901    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0818 12:06:38.699819    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0818 12:06:38.718980    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 12:06:38.739258    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 12:06:38.759233    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 12:06:38.779159    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /usr/share/ca-certificates/15262.pem (1708 bytes)
	I0818 12:06:38.799128    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 12:06:38.819459    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem --> /usr/share/ca-certificates/1526.pem (1338 bytes)
	I0818 12:06:38.839485    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0818 12:06:38.853931    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0818 12:06:38.867660    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0818 12:06:38.881016    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0818 12:06:38.894734    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0818 12:06:38.908655    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0818 12:06:38.922215    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
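	The preceding "scp memory -->" transfers stream byte buffers held by the client process straight onto the VM over the established SSH connection; nothing is round-tripped through local files. A sketch of that shape with golang.org/x/crypto/ssh, reusing the address and key path from the sshutil line above (the sudo tee target and helper name are assumptions):
	
	package main
	
	import (
		"bytes"
		"log"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	// pushBytes writes an in-memory asset to a remote path by piping it into
	// "sudo tee" over an SSH session -- the same shape as the
	// "scp memory --> /var/lib/minikube/..." transfers in this log.
	func pushBytes(client *ssh.Client, data []byte, remotePath string) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(data)
		return sess.Run("sudo tee " + remotePath + " >/dev/null")
	}
	
	func main() {
		key, err := os.ReadFile("/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only shortcut
		}
		client, err := ssh.Dial("tcp", "192.169.0.5:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		if err := pushBytes(client, []byte("example"), "/tmp/asset"); err != nil {
			log.Fatal(err)
		}
	}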
	I0818 12:06:38.936152    3824 ssh_runner.go:195] Run: openssl version
	I0818 12:06:38.940292    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15262.pem && ln -fs /usr/share/ca-certificates/15262.pem /etc/ssl/certs/15262.pem"
	I0818 12:06:38.948670    3824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15262.pem
	I0818 12:06:38.951984    3824 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:54 /usr/share/ca-certificates/15262.pem
	I0818 12:06:38.952025    3824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15262.pem
	I0818 12:06:38.956301    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15262.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 12:06:38.964945    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 12:06:38.973410    3824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:06:38.976837    3824 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:38 /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:06:38.976884    3824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:06:38.980998    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 12:06:38.989539    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1526.pem && ln -fs /usr/share/ca-certificates/1526.pem /etc/ssl/certs/1526.pem"
	I0818 12:06:38.998105    3824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1526.pem
	I0818 12:06:39.001464    3824 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:54 /usr/share/ca-certificates/1526.pem
	I0818 12:06:39.001509    3824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1526.pem
	I0818 12:06:39.005796    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1526.pem /etc/ssl/certs/51391683.0"
	I0818 12:06:39.014114    3824 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 12:06:39.017475    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 12:06:39.021708    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 12:06:39.025941    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 12:06:39.030326    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 12:06:39.034611    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 12:06:39.038815    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
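	Two OpenSSL idioms appear above: "x509 -hash -noout" prints the subject-name hash used to name the /etc/ssl/certs/<hash>.0 symlinks (OpenSSL's hashed-directory CA lookup, hence links like 51391683.0), and "-checkend 86400" exits non-zero if the certificate expires within the next 24 hours. The expiry check can also be done natively; a sketch with crypto/x509, using a cert path from the log (the helper name is an assumption):
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// expiresWithin reports whether the first certificate in a PEM file
	// expires inside the given window -- the native equivalent of
	// "openssl x509 -noout -in <file> -checkend <seconds>" run above.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}
	
	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/etcd/peer.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}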
	I0818 12:06:39.043094    3824 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.31.0 docker true true} ...
	I0818 12:06:39.043154    3824 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-373000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
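	In the kubelet unit rendered above, the bare "ExecStart=" line is the standard systemd drop-in idiom: it clears the ExecStart inherited from the base unit so the following ExecStart can replace it (for ordinary service types systemd rejects a second ExecStart unless the list is reset first). A sketch of templating such a drop-in; the flag set is trimmed for brevity and is not minikube's actual template:
	
	package main
	
	import (
		"os"
		"text/template"
	)
	
	// The blank "ExecStart=" resets the inherited ExecStart before the
	// drop-in supplies its own command line.
	const dropIn = `[Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}
	
	[Install]
	`
	
	func main() {
		t := template.Must(template.New("kubelet").Parse(dropIn))
		_ = t.Execute(os.Stdout, map[string]string{
			"KubernetesVersion": "v1.31.0",
			"NodeName":          "ha-373000-m03",
			"NodeIP":            "192.169.0.7",
		})
	}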
	I0818 12:06:39.043171    3824 kube-vip.go:115] generating kube-vip config ...
	I0818 12:06:39.043216    3824 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0818 12:06:39.056006    3824 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0818 12:06:39.056050    3824 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
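	The manifest above runs kube-vip as a static pod on each control plane: it answers ARP for the VIP 192.169.0.254 (vip_arp/address), load-balances port 8443 across apiservers (lb_enable/lb_port), and elects a single VIP holder through the plndr-cp-lock lease. The lease timings must keep retryperiod < renewdeadline < leaseduration, roughly the ordering client-go's leader election validates; a small check of the values from the manifest (the helper is illustrative):
	
	package main
	
	import (
		"fmt"
		"time"
	)
	
	// validate enforces retry < renew < lease; otherwise the lease holder
	// could lose the lock before it ever retries a renewal.
	func validate(lease, renew, retry time.Duration) error {
		if !(retry < renew && renew < lease) {
			return fmt.Errorf("want retry < renew < lease, got lease=%v renew=%v retry=%v", lease, renew, retry)
		}
		return nil
	}
	
	func main() {
		// 5s / 3s / 1s, as set by vip_leaseduration, vip_renewdeadline,
		// and vip_retryperiod above.
		fmt.Println(validate(5*time.Second, 3*time.Second, 1*time.Second)) // <nil>
	}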
	I0818 12:06:39.056106    3824 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 12:06:39.064688    3824 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 12:06:39.064746    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0818 12:06:39.073725    3824 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0818 12:06:39.087281    3824 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 12:06:39.101247    3824 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0818 12:06:39.115342    3824 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0818 12:06:39.118445    3824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 12:06:39.127826    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:06:39.220452    3824 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 12:06:39.236932    3824 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:06:39.237124    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:06:39.258433    3824 out.go:177] * Verifying Kubernetes components...
	I0818 12:06:39.298999    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:06:39.406166    3824 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 12:06:39.422783    3824 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:06:39.423042    3824 kapi.go:59] client config for ha-373000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xcb43f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0818 12:06:39.423091    3824 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
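	The override logged here swaps the kubeconfig's HA endpoint (192.169.0.254:8443, the kube-vip VIP) for one concrete apiserver while the third control plane joins, since the VIP may still be converging. A sketch of the same pinning with client-go, with the path and addresses taken from the log:
	
	package main
	
	import (
		"fmt"
	
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Load the kubeconfig, then pin the client to one apiserver
		// instead of the HA VIP -- the override in the warning above.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19423-1007/kubeconfig")
		if err != nil {
			panic(err)
		}
		cfg.Host = "https://192.169.0.5:8443" // bypass the kube-vip address
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println("client pinned to", cfg.Host, clientset != nil)
	}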
	I0818 12:06:39.423285    3824 node_ready.go:35] waiting up to 6m0s for node "ha-373000-m03" to be "Ready" ...
	I0818 12:06:39.423367    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:39.423379    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.423392    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.423403    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.425980    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:39.924516    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:39.924530    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.924537    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.924541    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.927146    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:39.927756    3824 node_ready.go:49] node "ha-373000-m03" has status "Ready":"True"
	I0818 12:06:39.927766    3824 node_ready.go:38] duration metric: took 504.486873ms for node "ha-373000-m03" to be "Ready" ...
	I0818 12:06:39.927772    3824 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
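	The burst of near-identical GETs that follows is a readiness poll: roughly every 500ms the client re-fetches a pod and its node until the status reports Ready. A sketch of the node half of that loop with client-go (the interval and helper are assumptions; the kubeconfig path uses client-go's default):
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitNodeReady re-fetches the node until it reports Ready=True or the
	// context expires -- the loop behind the repeated node GETs below.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
			}
		}
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		fmt.Println(waitNodeReady(ctx, cs, "ha-373000-m03"))
	}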
	I0818 12:06:39.927816    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0818 12:06:39.927826    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.927832    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.927835    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.932950    3824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 12:06:39.939217    3824 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hv98f" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:39.939280    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-hv98f
	I0818 12:06:39.939289    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.939296    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.939299    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.942170    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:39.942704    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:06:39.942712    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.942718    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.942722    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.945194    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:39.945502    3824 pod_ready.go:93] pod "coredns-6f6b679f8f-hv98f" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:39.945513    3824 pod_ready.go:82] duration metric: took 6.280436ms for pod "coredns-6f6b679f8f-hv98f" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:39.945527    3824 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-rcfmc" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:39.945573    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-rcfmc
	I0818 12:06:39.945579    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.945596    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.945604    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.947744    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:39.948231    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:06:39.948239    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.948244    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.948249    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.949935    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:39.950306    3824 pod_ready.go:93] pod "coredns-6f6b679f8f-rcfmc" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:39.950316    3824 pod_ready.go:82] duration metric: took 4.783283ms for pod "coredns-6f6b679f8f-rcfmc" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:39.950324    3824 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:39.950360    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000
	I0818 12:06:39.950366    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.950371    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.950376    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.952196    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:39.952623    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:06:39.952632    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.952637    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.952640    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.954395    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:39.954700    3824 pod_ready.go:93] pod "etcd-ha-373000" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:39.954713    3824 pod_ready.go:82] duration metric: took 4.380752ms for pod "etcd-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:39.954728    3824 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:39.954770    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m02
	I0818 12:06:39.954775    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.954781    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.954784    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.956816    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:39.957264    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:06:39.957272    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.957278    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.957281    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.958954    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:39.959393    3824 pod_ready.go:93] pod "etcd-ha-373000-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:39.959403    3824 pod_ready.go:82] duration metric: took 4.669444ms for pod "etcd-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:39.959410    3824 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:40.124592    3824 request.go:632] Waited for 165.145751ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:40.124629    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:40.124633    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:40.124639    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:40.124645    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:40.127273    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
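	The "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's default token-bucket rate limiter (QPS 5, burst 10): the readiness poll fires requests faster than that budget, so they queue in the client before ever reaching the apiserver. If the delays mattered, the limits can be raised on the rest.Config; a sketch (the chosen values are illustrative, trading apiserver load for latency):
	
	package main
	
	import (
		"fmt"
	
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		// client-go defaults to QPS=5, Burst=10; a tight poll loop over many
		// objects outruns that and triggers the "Waited ..." log lines.
		cfg.QPS = 50
		cfg.Burst = 100
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println("clientset ready:", cs != nil)
	}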
	I0818 12:06:40.325487    3824 request.go:632] Waited for 197.85948ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:40.325561    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:40.325576    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:40.325592    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:40.325603    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:40.328610    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:40.524678    3824 request.go:632] Waited for 64.314725ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:40.524779    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:40.524787    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:40.524794    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:40.524800    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:40.534379    3824 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0818 12:06:40.724687    3824 request.go:632] Waited for 189.641273ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:40.724767    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:40.724780    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:40.724790    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:40.724795    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:40.727857    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:40.960310    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:40.960323    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:40.960330    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:40.960334    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:40.962980    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:41.125004    3824 request.go:632] Waited for 161.489984ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:41.125051    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:41.125059    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:41.125068    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:41.125074    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:41.127660    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:41.459552    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:41.459565    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:41.459572    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:41.459576    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:41.462348    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:41.524806    3824 request.go:632] Waited for 61.84167ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:41.524878    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:41.524889    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:41.524897    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:41.524902    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:41.527287    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:41.959574    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:41.959588    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:41.959594    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:41.959599    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:41.962051    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:41.962553    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:41.962563    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:41.962570    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:41.962588    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:41.964779    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:41.965088    3824 pod_ready.go:103] pod "etcd-ha-373000-m03" in "kube-system" namespace has status "Ready":"False"
	I0818 12:06:42.461485    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:42.461498    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:42.461504    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:42.461507    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:42.463825    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:42.464339    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:42.464350    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:42.464358    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:42.464363    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:42.466190    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:42.960283    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:42.960301    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:42.960308    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:42.960313    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:42.962745    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:42.963399    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:42.963408    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:42.963415    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:42.963420    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:42.965667    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:43.460941    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:43.460961    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:43.460973    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:43.460980    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:43.464358    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:43.464865    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:43.464876    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:43.464885    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:43.464903    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:43.466644    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:43.960616    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:43.960635    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:43.960662    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:43.960670    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:43.963241    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:43.963592    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:43.963599    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:43.963605    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:43.963609    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:43.965295    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:43.965679    3824 pod_ready.go:103] pod "etcd-ha-373000-m03" in "kube-system" namespace has status "Ready":"False"
	I0818 12:06:44.459655    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:44.459670    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:44.459678    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:44.459684    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:44.462938    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:44.463437    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:44.463446    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:44.463453    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:44.463456    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:44.465455    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:44.960738    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:44.960764    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:44.960775    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:44.960781    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:44.964513    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:44.965181    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:44.965189    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:44.965195    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:44.965198    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:44.967125    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:45.459544    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:45.459557    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:45.459564    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:45.459567    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:45.461789    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:45.462287    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:45.462295    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:45.462301    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:45.462304    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:45.463842    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:45.959866    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:45.959882    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:45.959891    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:45.959895    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:45.962334    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:45.962673    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:45.962680    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:45.962686    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:45.962691    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:45.964328    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:46.460263    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:46.460278    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:46.460302    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:46.460307    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:46.462738    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:46.463273    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:46.463281    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:46.463287    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:46.463290    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:46.465376    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:46.465623    3824 pod_ready.go:103] pod "etcd-ha-373000-m03" in "kube-system" namespace has status "Ready":"False"
	I0818 12:06:46.960651    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:46.960728    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:46.960746    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:46.960756    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:46.963413    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:46.963863    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:46.963871    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:46.963877    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:46.963879    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:46.965522    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:47.460546    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:47.460559    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:47.460565    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:47.460569    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:47.462347    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:47.462831    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:47.462839    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:47.462845    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:47.462849    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:47.465797    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:47.959568    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:47.959595    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:47.959606    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:47.959613    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:47.962968    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:47.963654    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:47.963665    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:47.963673    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:47.963678    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:47.965348    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:48.460843    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:48.460865    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:48.460878    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:48.460888    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:48.464226    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:48.464806    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:48.464814    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:48.464820    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:48.464824    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:48.466523    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:48.466821    3824 pod_ready.go:103] pod "etcd-ha-373000-m03" in "kube-system" namespace has status "Ready":"False"
	I0818 12:06:48.960506    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:48.960532    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:48.960544    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:48.960549    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:48.964130    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:48.964586    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:48.964596    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:48.964604    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:48.964610    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:48.966425    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:49.459390    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:49.459415    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:49.459427    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:49.459433    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:49.463245    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:49.463769    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:49.463781    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:49.463788    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:49.463792    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:49.466543    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:49.959537    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:49.959561    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:49.959571    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:49.959577    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:49.962607    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:49.963064    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:49.963072    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:49.963077    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:49.963081    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:49.964839    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:50.460746    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:50.460763    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:50.460770    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:50.460773    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:50.463380    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:50.463793    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:50.463801    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:50.463807    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:50.463810    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:50.466499    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:50.466793    3824 pod_ready.go:103] pod "etcd-ha-373000-m03" in "kube-system" namespace has status "Ready":"False"
	I0818 12:06:50.960528    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:50.960552    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:50.960563    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:50.960569    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:50.964095    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:50.964754    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:50.964765    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:50.964773    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:50.964779    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:50.966674    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:51.459276    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:51.459296    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:51.459307    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:51.459323    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:51.462737    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:51.463318    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:51.463325    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:51.463331    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:51.463342    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:51.465140    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:51.960158    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:51.960178    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:51.960190    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:51.960196    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:51.963615    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:51.964184    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:51.964194    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:51.964201    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:51.964208    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:51.966317    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:52.459260    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:52.459275    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:52.459284    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:52.459299    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:52.461808    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:52.462199    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:52.462207    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:52.462214    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:52.462217    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:52.464015    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:52.959295    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:52.959313    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:52.959324    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:52.959330    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:52.963923    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:06:52.964435    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:52.964443    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:52.964449    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:52.964452    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:52.967830    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:52.968298    3824 pod_ready.go:103] pod "etcd-ha-373000-m03" in "kube-system" namespace has status "Ready":"False"
	I0818 12:06:53.459316    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:53.459335    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:53.459343    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:53.459349    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:53.464675    3824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 12:06:53.465233    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:53.465241    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:53.465248    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:53.465251    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:53.470328    3824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 12:06:53.960317    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:53.960343    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:53.960354    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:53.960360    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:53.964420    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:06:53.965229    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:53.965236    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:53.965242    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:53.965246    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:53.967660    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:54.459303    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:54.459315    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:54.459321    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:54.459324    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:54.461902    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:54.462298    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:54.462305    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:54.462310    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:54.462313    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:54.464747    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:54.960293    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:54.960319    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:54.960331    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:54.960339    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:54.963847    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:54.964473    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:54.964483    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:54.964491    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:54.964497    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:54.966299    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:55.459778    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:55.459804    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:55.459816    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:55.459824    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:55.463395    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:55.464072    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:55.464083    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:55.464091    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:55.464095    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:55.465859    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:55.466228    3824 pod_ready.go:103] pod "etcd-ha-373000-m03" in "kube-system" namespace has status "Ready":"False"
	I0818 12:06:55.959274    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:55.959295    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:55.959306    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:55.959313    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:55.962842    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:55.963214    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:55.963221    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:55.963227    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:55.963230    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:55.964851    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:56.459680    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:56.459702    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:56.459713    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:56.459719    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:56.463508    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:56.463978    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:56.463986    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:56.463993    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:56.463996    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:56.465851    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:56.959108    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:56.959168    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:56.959180    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:56.959188    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:56.962593    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:56.963101    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:56.963111    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:56.963119    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:56.963124    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:56.964734    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:57.458993    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:57.459009    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:57.459033    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:57.459044    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:57.461199    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:57.461630    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:57.461638    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:57.461644    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:57.461647    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:57.464799    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:57.959429    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:57.959455    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:57.959466    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:57.959471    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:57.962366    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:57.962731    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:57.962739    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:57.962745    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:57.962748    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:57.964355    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:57.964866    3824 pod_ready.go:103] pod "etcd-ha-373000-m03" in "kube-system" namespace has status "Ready":"False"
	I0818 12:06:58.459677    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:58.459697    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.459709    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.459714    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.463092    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:58.463794    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:58.463802    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.463809    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.463811    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.465563    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:58.959591    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:58.959612    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.959623    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.959631    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.963002    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:58.964342    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:58.964361    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.964371    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.964377    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.966371    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:58.966690    3824 pod_ready.go:93] pod "etcd-ha-373000-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:58.966699    3824 pod_ready.go:82] duration metric: took 19.007875373s for pod "etcd-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
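	The paired pod/node GETs above, repeated roughly every 500ms, are a readiness poll: minikube re-fetches the pod until its Ready condition turns True. A minimal client-go sketch of the same pattern follows; the kubeconfig path and the hard-coded pod name are illustrative assumptions, not minikube's actual wiring.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Illustrative kubeconfig path; the test run uses its own MINIKUBE_HOME.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-ha-373000-m03", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
}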
	I0818 12:06:58.966710    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:58.966744    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000
	I0818 12:06:58.966749    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.966754    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.966759    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.968551    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:58.969049    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:06:58.969056    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.969062    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.969065    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.970647    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:58.971055    3824 pod_ready.go:93] pod "kube-apiserver-ha-373000" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:58.971063    3824 pod_ready.go:82] duration metric: took 4.347127ms for pod "kube-apiserver-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:58.971069    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:58.971100    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:06:58.971105    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.971110    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.971116    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.972830    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:58.973265    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:06:58.973273    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.973279    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.973282    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.974809    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:58.975155    3824 pod_ready.go:93] pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:58.975165    3824 pod_ready.go:82] duration metric: took 4.091205ms for pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:58.975172    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:58.975209    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m03
	I0818 12:06:58.975214    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.975219    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.975223    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.976734    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:58.977185    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:58.977194    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.977199    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.977203    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.978595    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:58.978942    3824 pod_ready.go:93] pod "kube-apiserver-ha-373000-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:58.978951    3824 pod_ready.go:82] duration metric: took 3.77353ms for pod "kube-apiserver-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:58.978957    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:58.978988    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:06:58.978993    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.978999    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.979003    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.980398    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:58.980845    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:06:58.980852    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.980858    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.980861    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.982260    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:58.982600    3824 pod_ready.go:93] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:58.982608    3824 pod_ready.go:82] duration metric: took 3.645796ms for pod "kube-controller-manager-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:58.982614    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:59.160214    3824 request.go:632] Waited for 177.557781ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000-m02
	I0818 12:06:59.160303    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000-m02
	I0818 12:06:59.160314    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:59.160334    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:59.160341    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:59.163272    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:59.360510    3824 request.go:632] Waited for 196.433912ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:06:59.360620    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:06:59.360630    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:59.360640    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:59.360649    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:59.364048    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:59.364505    3824 pod_ready.go:93] pod "kube-controller-manager-ha-373000-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:59.364516    3824 pod_ready.go:82] duration metric: took 381.90816ms for pod "kube-controller-manager-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
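	The "Waited ... due to client-side throttling" lines above come from client-go's default client-side rate limiter (QPS 5, burst 10), which spaces requests about 200ms apart once the burst is spent; that matches the ~195ms waits logged here. A sketch of loosening those limits on a rest.Config; the specific numbers are illustrative assumptions, not minikube's settings.

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFasterClient builds a clientset with a higher client-side rate limit,
// so paired pod/node GETs are not delayed by the limiter.
func newFasterClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // default is 5: one token every 200ms once the burst is used
	cfg.Burst = 100 // default is 10
	return kubernetes.NewForConfig(cfg)
}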
	I0818 12:06:59.364525    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:59.559640    3824 request.go:632] Waited for 195.079426ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000-m03
	I0818 12:06:59.559699    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000-m03
	I0818 12:06:59.559705    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:59.559711    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:59.559715    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:59.561728    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:59.760676    3824 request.go:632] Waited for 198.422535ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:59.760731    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:59.760742    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:59.760754    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:59.760761    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:59.764272    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:59.764909    3824 pod_ready.go:93] pod "kube-controller-manager-ha-373000-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:59.764919    3824 pod_ready.go:82] duration metric: took 400.401698ms for pod "kube-controller-manager-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:59.764926    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2xkhp" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:59.960270    3824 request.go:632] Waited for 195.290695ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2xkhp
	I0818 12:06:59.960398    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2xkhp
	I0818 12:06:59.960409    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:59.960422    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:59.960432    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:59.963585    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:00.161284    3824 request.go:632] Waited for 197.152508ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:07:00.161348    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:07:00.161357    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:00.161364    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:00.161368    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:00.163499    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:07:00.163968    3824 pod_ready.go:93] pod "kube-proxy-2xkhp" in "kube-system" namespace has status "Ready":"True"
	I0818 12:07:00.163978    3824 pod_ready.go:82] duration metric: took 399.059814ms for pod "kube-proxy-2xkhp" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:00.163984    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5hg88" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:00.360550    3824 request.go:632] Waited for 196.524224ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5hg88
	I0818 12:07:00.360645    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5hg88
	I0818 12:07:00.360674    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:00.360705    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:00.360715    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:00.364230    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:00.559710    3824 request.go:632] Waited for 194.892476ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:07:00.559754    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:07:00.559760    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:00.559767    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:00.559770    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:00.561706    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:07:00.562031    3824 pod_ready.go:93] pod "kube-proxy-5hg88" in "kube-system" namespace has status "Ready":"True"
	I0818 12:07:00.562041    3824 pod_ready.go:82] duration metric: took 398.063984ms for pod "kube-proxy-5hg88" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:00.562048    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bprqp" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:00.760849    3824 request.go:632] Waited for 198.76912ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bprqp
	I0818 12:07:00.760881    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bprqp
	I0818 12:07:00.760887    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:00.760893    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:00.760897    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:00.763176    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:07:00.959686    3824 request.go:632] Waited for 195.875972ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:07:00.959818    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:07:00.959837    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:00.959848    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:00.959855    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:00.963072    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:00.963632    3824 pod_ready.go:93] pod "kube-proxy-bprqp" in "kube-system" namespace has status "Ready":"True"
	I0818 12:07:00.963645    3824 pod_ready.go:82] duration metric: took 401.603061ms for pod "kube-proxy-bprqp" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:00.963654    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-l7zlx" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:01.160451    3824 request.go:632] Waited for 196.719541ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l7zlx
	I0818 12:07:01.160506    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l7zlx
	I0818 12:07:01.160515    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:01.160526    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:01.160534    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:01.163885    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:01.360939    3824 request.go:632] Waited for 196.415223ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m04
	I0818 12:07:01.361054    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m04
	I0818 12:07:01.361063    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:01.361074    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:01.361081    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:01.364720    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:01.365356    3824 pod_ready.go:98] node "ha-373000-m04" hosting pod "kube-proxy-l7zlx" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-373000-m04" has status "Ready":"Unknown"
	I0818 12:07:01.365374    3824 pod_ready.go:82] duration metric: took 401.724878ms for pod "kube-proxy-l7zlx" in "kube-system" namespace to be "Ready" ...
	E0818 12:07:01.365383    3824 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-373000-m04" hosting pod "kube-proxy-l7zlx" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-373000-m04" has status "Ready":"Unknown"
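	The skip above hinges on the hosting node's Ready condition: ha-373000-m04 reports "Unknown", so waiting for kube-proxy-l7zlx to become Ready would never converge. A minimal sketch of that gate, operating on a corev1.Node fetched as in the earlier sketch:

package main

import corev1 "k8s.io/api/core/v1"

// nodeReady reports whether a node's Ready condition is True; "Unknown"
// (a node the control plane has lost contact with) counts as not ready.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}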
	I0818 12:07:01.365389    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:01.560679    3824 request.go:632] Waited for 195.242196ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000
	I0818 12:07:01.560723    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000
	I0818 12:07:01.560732    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:01.560740    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:01.560745    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:01.562645    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:07:01.761089    3824 request.go:632] Waited for 198.042947ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:07:01.761190    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:07:01.761200    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:01.761212    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:01.761218    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:01.764398    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:01.764800    3824 pod_ready.go:93] pod "kube-scheduler-ha-373000" in "kube-system" namespace has status "Ready":"True"
	I0818 12:07:01.764826    3824 pod_ready.go:82] duration metric: took 399.443504ms for pod "kube-scheduler-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:01.764834    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:01.959600    3824 request.go:632] Waited for 194.717673ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000-m02
	I0818 12:07:01.959651    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000-m02
	I0818 12:07:01.959662    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:01.959672    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:01.959678    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:01.963127    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:02.159886    3824 request.go:632] Waited for 196.172195ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:07:02.159958    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:07:02.159975    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:02.159988    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:02.159997    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:02.163322    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:02.163764    3824 pod_ready.go:93] pod "kube-scheduler-ha-373000-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 12:07:02.163775    3824 pod_ready.go:82] duration metric: took 398.944902ms for pod "kube-scheduler-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:02.163781    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:02.359608    3824 request.go:632] Waited for 195.759022ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000-m03
	I0818 12:07:02.359664    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000-m03
	I0818 12:07:02.359677    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:02.359715    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:02.359722    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:02.363386    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:02.560395    3824 request.go:632] Waited for 196.314469ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:07:02.560474    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:07:02.560483    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:02.560491    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:02.560495    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:02.563041    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:07:02.563443    3824 pod_ready.go:93] pod "kube-scheduler-ha-373000-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 12:07:02.563453    3824 pod_ready.go:82] duration metric: took 399.678634ms for pod "kube-scheduler-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:02.563460    3824 pod_ready.go:39] duration metric: took 22.636385926s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 12:07:02.563470    3824 api_server.go:52] waiting for apiserver process to appear ...
	I0818 12:07:02.563523    3824 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 12:07:02.576904    3824 api_server.go:72] duration metric: took 23.340671308s to wait for apiserver process to appear ...
	I0818 12:07:02.576917    3824 api_server.go:88] waiting for apiserver healthz status ...
	I0818 12:07:02.576928    3824 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0818 12:07:02.581021    3824 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0818 12:07:02.581063    3824 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0818 12:07:02.581069    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:02.581075    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:02.581080    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:02.581650    3824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0818 12:07:02.581745    3824 api_server.go:141] control plane version: v1.31.0
	I0818 12:07:02.581754    3824 api_server.go:131] duration metric: took 4.833461ms to wait for apiserver health ...
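	The health gate above is two plain HTTPS requests: GET /healthz, expecting the literal body "ok", then GET /version for the control-plane version. A standalone sketch; certificate verification and client authentication are deliberately elided here (InsecureSkipVerify), which is a simplification for the sketch, not how minikube authenticates to the apiserver.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// CA and client-cert handling elided for brevity.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	for _, path := range []string{"/healthz", "/version"} {
		resp, err := client.Get("https://192.169.0.5:8443" + path)
		if err != nil {
			panic(err)
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s: %d %s\n", path, resp.StatusCode, body) // /healthz should print "200 ok"
	}
}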
	I0818 12:07:02.581759    3824 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 12:07:02.760273    3824 request.go:632] Waited for 178.46854ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0818 12:07:02.760344    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0818 12:07:02.760352    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:02.760358    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:02.760361    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:02.765147    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:07:02.770514    3824 system_pods.go:59] 26 kube-system pods found
	I0818 12:07:02.770527    3824 system_pods.go:61] "coredns-6f6b679f8f-hv98f" [3ebd3bbf-bcb9-4afc-80f3-eaeb314da8eb] Running
	I0818 12:07:02.770531    3824 system_pods.go:61] "coredns-6f6b679f8f-rcfmc" [904f6af6-c2de-4bc2-bc1d-f87e209ef4ed] Running
	I0818 12:07:02.770534    3824 system_pods.go:61] "etcd-ha-373000" [29cf5b88-f2b5-40d4-9f40-5aedcc75c4d1] Running
	I0818 12:07:02.770537    3824 system_pods.go:61] "etcd-ha-373000-m02" [6b2f793b-6d8c-4580-b555-2be21fb70246] Running
	I0818 12:07:02.770539    3824 system_pods.go:61] "etcd-ha-373000-m03" [377788a8-77e7-4e5e-a488-9791f8e26b32] Running
	I0818 12:07:02.770545    3824 system_pods.go:61] "kindnet-2gf5h" [ff15d17a-fb96-4721-847f-13f5c0e2613a] Running
	I0818 12:07:02.770549    3824 system_pods.go:61] "kindnet-k4c4p" [5219ec54-5c0a-471a-a99f-2823b4f944d9] Running
	I0818 12:07:02.770552    3824 system_pods.go:61] "kindnet-q7ghp" [933c4eb9-1d38-4575-a873-183f2fd31b25] Running
	I0818 12:07:02.770556    3824 system_pods.go:61] "kindnet-wxcx9" [11d95b76-db04-43de-9d3d-ce2147fbed21] Running
	I0818 12:07:02.770558    3824 system_pods.go:61] "kube-apiserver-ha-373000" [cc4f174d-0e3c-4aa8-b039-0bf57d6b0228] Running
	I0818 12:07:02.770561    3824 system_pods.go:61] "kube-apiserver-ha-373000-m02" [b72f02e8-46f2-46d4-8e73-03281cc424fc] Running
	I0818 12:07:02.770564    3824 system_pods.go:61] "kube-apiserver-ha-373000-m03" [5d3892b8-07e1-40a0-b4dc-4d9d9e8bc254] Running
	I0818 12:07:02.770566    3824 system_pods.go:61] "kube-controller-manager-ha-373000" [78b55cfd-8f33-4071-99b0-900fcb256ed1] Running
	I0818 12:07:02.770570    3824 system_pods.go:61] "kube-controller-manager-ha-373000-m02" [e261db19-d00f-41d1-aaf2-c1b77067c217] Running
	I0818 12:07:02.770573    3824 system_pods.go:61] "kube-controller-manager-ha-373000-m03" [fa71d7d9-904d-4df6-afe9-768758e32854] Running
	I0818 12:07:02.770577    3824 system_pods.go:61] "kube-proxy-2xkhp" [6a1cb8ad-e118-4db0-8852-b2d4973f536a] Running
	I0818 12:07:02.770580    3824 system_pods.go:61] "kube-proxy-5hg88" [4d00bf3e-8817-4220-91c5-7085fa453fa6] Running
	I0818 12:07:02.770583    3824 system_pods.go:61] "kube-proxy-bprqp" [4cf4fd8d-01fb-4e36-bede-6a44ab84b7bf] Running
	I0818 12:07:02.770585    3824 system_pods.go:61] "kube-proxy-l7zlx" [853afdf8-598a-435c-8c48-233287580493] Running
	I0818 12:07:02.770588    3824 system_pods.go:61] "kube-scheduler-ha-373000" [b1284a96-b76a-4909-9301-bfff7bbef8d6] Running
	I0818 12:07:02.770590    3824 system_pods.go:61] "kube-scheduler-ha-373000-m02" [fdfdc014-0b22-48aa-a1c6-5885bc67d472] Running
	I0818 12:07:02.770593    3824 system_pods.go:61] "kube-scheduler-ha-373000-m03" [89e378c4-274f-4aa1-8683-127061a7033e] Running
	I0818 12:07:02.770596    3824 system_pods.go:61] "kube-vip-ha-373000" [8df6a228-9693-4a21-af87-1bc9fc2af995] Running
	I0818 12:07:02.770598    3824 system_pods.go:61] "kube-vip-ha-373000-m02" [9cf001da-2ec7-4c04-bcb8-baa2bd4626f4] Running
	I0818 12:07:02.770601    3824 system_pods.go:61] "kube-vip-ha-373000-m03" [2e8a44eb-d965-4a90-ae08-464c7064ae17] Running
	I0818 12:07:02.770603    3824 system_pods.go:61] "storage-provisioner" [aa9c6f5d-6c1e-4901-83bb-62bc420ea044] Running
	I0818 12:07:02.770607    3824 system_pods.go:74] duration metric: took 188.849851ms to wait for pod list to return data ...
	I0818 12:07:02.770613    3824 default_sa.go:34] waiting for default service account to be created ...
	I0818 12:07:02.959522    3824 request.go:632] Waited for 188.86655ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0818 12:07:02.959578    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0818 12:07:02.959587    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:02.959598    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:02.959608    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:02.963054    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:02.963263    3824 default_sa.go:45] found service account: "default"
	I0818 12:07:02.963277    3824 default_sa.go:55] duration metric: took 192.665025ms for default service account to be created ...
	I0818 12:07:02.963284    3824 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 12:07:03.160239    3824 request.go:632] Waited for 196.905811ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0818 12:07:03.160320    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0818 12:07:03.160329    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:03.160341    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:03.160363    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:03.165404    3824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 12:07:03.170694    3824 system_pods.go:86] 26 kube-system pods found
	I0818 12:07:03.170706    3824 system_pods.go:89] "coredns-6f6b679f8f-hv98f" [3ebd3bbf-bcb9-4afc-80f3-eaeb314da8eb] Running
	I0818 12:07:03.170710    3824 system_pods.go:89] "coredns-6f6b679f8f-rcfmc" [904f6af6-c2de-4bc2-bc1d-f87e209ef4ed] Running
	I0818 12:07:03.170714    3824 system_pods.go:89] "etcd-ha-373000" [29cf5b88-f2b5-40d4-9f40-5aedcc75c4d1] Running
	I0818 12:07:03.170717    3824 system_pods.go:89] "etcd-ha-373000-m02" [6b2f793b-6d8c-4580-b555-2be21fb70246] Running
	I0818 12:07:03.170720    3824 system_pods.go:89] "etcd-ha-373000-m03" [377788a8-77e7-4e5e-a488-9791f8e26b32] Running
	I0818 12:07:03.170723    3824 system_pods.go:89] "kindnet-2gf5h" [ff15d17a-fb96-4721-847f-13f5c0e2613a] Running
	I0818 12:07:03.170725    3824 system_pods.go:89] "kindnet-k4c4p" [5219ec54-5c0a-471a-a99f-2823b4f944d9] Running
	I0818 12:07:03.170728    3824 system_pods.go:89] "kindnet-q7ghp" [933c4eb9-1d38-4575-a873-183f2fd31b25] Running
	I0818 12:07:03.170731    3824 system_pods.go:89] "kindnet-wxcx9" [11d95b76-db04-43de-9d3d-ce2147fbed21] Running
	I0818 12:07:03.170733    3824 system_pods.go:89] "kube-apiserver-ha-373000" [cc4f174d-0e3c-4aa8-b039-0bf57d6b0228] Running
	I0818 12:07:03.170737    3824 system_pods.go:89] "kube-apiserver-ha-373000-m02" [b72f02e8-46f2-46d4-8e73-03281cc424fc] Running
	I0818 12:07:03.170740    3824 system_pods.go:89] "kube-apiserver-ha-373000-m03" [5d3892b8-07e1-40a0-b4dc-4d9d9e8bc254] Running
	I0818 12:07:03.170743    3824 system_pods.go:89] "kube-controller-manager-ha-373000" [78b55cfd-8f33-4071-99b0-900fcb256ed1] Running
	I0818 12:07:03.170746    3824 system_pods.go:89] "kube-controller-manager-ha-373000-m02" [e261db19-d00f-41d1-aaf2-c1b77067c217] Running
	I0818 12:07:03.170749    3824 system_pods.go:89] "kube-controller-manager-ha-373000-m03" [fa71d7d9-904d-4df6-afe9-768758e32854] Running
	I0818 12:07:03.170752    3824 system_pods.go:89] "kube-proxy-2xkhp" [6a1cb8ad-e118-4db0-8852-b2d4973f536a] Running
	I0818 12:07:03.170755    3824 system_pods.go:89] "kube-proxy-5hg88" [4d00bf3e-8817-4220-91c5-7085fa453fa6] Running
	I0818 12:07:03.170757    3824 system_pods.go:89] "kube-proxy-bprqp" [4cf4fd8d-01fb-4e36-bede-6a44ab84b7bf] Running
	I0818 12:07:03.170760    3824 system_pods.go:89] "kube-proxy-l7zlx" [853afdf8-598a-435c-8c48-233287580493] Running
	I0818 12:07:03.170763    3824 system_pods.go:89] "kube-scheduler-ha-373000" [b1284a96-b76a-4909-9301-bfff7bbef8d6] Running
	I0818 12:07:03.170765    3824 system_pods.go:89] "kube-scheduler-ha-373000-m02" [fdfdc014-0b22-48aa-a1c6-5885bc67d472] Running
	I0818 12:07:03.170769    3824 system_pods.go:89] "kube-scheduler-ha-373000-m03" [89e378c4-274f-4aa1-8683-127061a7033e] Running
	I0818 12:07:03.170772    3824 system_pods.go:89] "kube-vip-ha-373000" [8df6a228-9693-4a21-af87-1bc9fc2af995] Running
	I0818 12:07:03.170774    3824 system_pods.go:89] "kube-vip-ha-373000-m02" [9cf001da-2ec7-4c04-bcb8-baa2bd4626f4] Running
	I0818 12:07:03.170777    3824 system_pods.go:89] "kube-vip-ha-373000-m03" [2e8a44eb-d965-4a90-ae08-464c7064ae17] Running
	I0818 12:07:03.170779    3824 system_pods.go:89] "storage-provisioner" [aa9c6f5d-6c1e-4901-83bb-62bc420ea044] Running
	I0818 12:07:03.170784    3824 system_pods.go:126] duration metric: took 207.500936ms to wait for k8s-apps to be running ...
	I0818 12:07:03.170789    3824 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 12:07:03.170841    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 12:07:03.182482    3824 system_svc.go:56] duration metric: took 11.680891ms WaitForService to wait for kubelet
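	The kubelet gate above reduces to one command run over SSH inside the VM: systemctl is-active --quiet prints nothing and exits 0 only when the unit is active, so the exit code is the whole answer. A local sketch of the same check, with the SSH transport omitted:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// is-active --quiet is silent; success or failure lives in the exit code.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}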
	I0818 12:07:03.182502    3824 kubeadm.go:582] duration metric: took 23.946290558s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:07:03.182518    3824 node_conditions.go:102] verifying NodePressure condition ...
	I0818 12:07:03.360851    3824 request.go:632] Waited for 178.265424ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0818 12:07:03.360972    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0818 12:07:03.360984    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:03.360994    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:03.361004    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:03.364644    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:03.365979    3824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 12:07:03.365989    3824 node_conditions.go:123] node cpu capacity is 2
	I0818 12:07:03.365996    3824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 12:07:03.365999    3824 node_conditions.go:123] node cpu capacity is 2
	I0818 12:07:03.366002    3824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 12:07:03.366005    3824 node_conditions.go:123] node cpu capacity is 2
	I0818 12:07:03.366008    3824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 12:07:03.366011    3824 node_conditions.go:123] node cpu capacity is 2
	I0818 12:07:03.366014    3824 node_conditions.go:105] duration metric: took 183.498142ms to run NodePressure ...
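	The NodePressure pass above lists all nodes once and reads two capacity fields per node, which is where the four ephemeral-storage/cpu pairs come from. A sketch using a clientset built as in the first example:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity prints one ephemeral-storage and one cpu capacity line
// per node, mirroring the node_conditions output above.
func printNodeCapacity(ctx context.Context, client *kubernetes.Clientset) error {
	nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}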
	I0818 12:07:03.366022    3824 start.go:241] waiting for startup goroutines ...
	I0818 12:07:03.366037    3824 start.go:255] writing updated cluster config ...
	I0818 12:07:03.387453    3824 out.go:201] 
	I0818 12:07:03.408870    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:07:03.408996    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:07:03.431363    3824 out.go:177] * Starting "ha-373000-m04" worker node in "ha-373000" cluster
	I0818 12:07:03.473303    3824 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:07:03.473331    3824 cache.go:56] Caching tarball of preloaded images
	I0818 12:07:03.473487    3824 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0818 12:07:03.473500    3824 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:07:03.473589    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:07:03.474432    3824 start.go:360] acquireMachinesLock for ha-373000-m04: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:07:03.474523    3824 start.go:364] duration metric: took 71.686µs to acquireMachinesLock for "ha-373000-m04"
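	The machines lock above serializes concurrent operations on the same machine; the logged Spec (500ms retry delay, 13m timeout) resembles a juju/mutex-style named mutex, though that attribution is an inference from the log. A much simpler flock-based sketch of the same idea, without the retry/timeout semantics:

package main

import (
	"os"
	"syscall"
)

// acquireLock takes an exclusive advisory lock on a per-machine lock file,
// blocking until it is available. Delay/timeout handling is omitted.
func acquireLock(path string) (*os.File, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0600)
	if err != nil {
		return nil, err
	}
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		f.Close()
		return nil, err
	}
	return f, nil // closing the file releases the lock
}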
	I0818 12:07:03.474542    3824 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:07:03.474548    3824 fix.go:54] fixHost starting: m04
	I0818 12:07:03.474855    3824 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:07:03.474882    3824 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:07:03.484549    3824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51853
	I0818 12:07:03.484938    3824 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:07:03.485323    3824 main.go:141] libmachine: Using API Version  1
	I0818 12:07:03.485338    3824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:07:03.485563    3824 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:07:03.485683    3824 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	I0818 12:07:03.485781    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetState
	I0818 12:07:03.485864    3824 main.go:141] libmachine: (ha-373000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:07:03.485969    3824 main.go:141] libmachine: (ha-373000-m04) DBG | hyperkit pid from json: 3421
	I0818 12:07:03.486880    3824 main.go:141] libmachine: (ha-373000-m04) DBG | hyperkit pid 3421 missing from process table
	I0818 12:07:03.486901    3824 fix.go:112] recreateIfNeeded on ha-373000-m04: state=Stopped err=<nil>
	I0818 12:07:03.486912    3824 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	W0818 12:07:03.486988    3824 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:07:03.508504    3824 out.go:177] * Restarting existing hyperkit VM for "ha-373000-m04" ...
	I0818 12:07:03.582318    3824 main.go:141] libmachine: (ha-373000-m04) Calling .Start
	I0818 12:07:03.582606    3824 main.go:141] libmachine: (ha-373000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:07:03.582712    3824 main.go:141] libmachine: (ha-373000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/hyperkit.pid
	I0818 12:07:03.582838    3824 main.go:141] libmachine: (ha-373000-m04) DBG | Using UUID 421610dc-2abf-427c-8c2b-c85701e511a2
	I0818 12:07:03.610902    3824 main.go:141] libmachine: (ha-373000-m04) DBG | Generated MAC f2:8c:91:ee:dd:c0
	I0818 12:07:03.610923    3824 main.go:141] libmachine: (ha-373000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000
	I0818 12:07:03.611054    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"421610dc-2abf-427c-8c2b-c85701e511a2", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000299560)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:07:03.611081    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"421610dc-2abf-427c-8c2b-c85701e511a2", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000299560)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:07:03.611126    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "421610dc-2abf-427c-8c2b-c85701e511a2", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/ha-373000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"}
	I0818 12:07:03.611176    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 421610dc-2abf-427c-8c2b-c85701e511a2 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/ha-373000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"
	I0818 12:07:03.611189    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 12:07:03.612626    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 DEBUG: hyperkit: Pid is 3877
	I0818 12:07:03.613079    3824 main.go:141] libmachine: (ha-373000-m04) DBG | Attempt 0
	I0818 12:07:03.613097    3824 main.go:141] libmachine: (ha-373000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:07:03.613147    3824 main.go:141] libmachine: (ha-373000-m04) DBG | hyperkit pid from json: 3877
	I0818 12:07:03.614336    3824 main.go:141] libmachine: (ha-373000-m04) DBG | Searching for f2:8c:91:ee:dd:c0 in /var/db/dhcpd_leases ...
	I0818 12:07:03.614413    3824 main.go:141] libmachine: (ha-373000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0818 12:07:03.614438    3824 main.go:141] libmachine: (ha-373000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c3979e}
	I0818 12:07:03.614464    3824 main.go:141] libmachine: (ha-373000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c3975b}
	I0818 12:07:03.614488    3824 main.go:141] libmachine: (ha-373000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39749}
	I0818 12:07:03.614500    3824 main.go:141] libmachine: (ha-373000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c245a6}
	I0818 12:07:03.614507    3824 main.go:141] libmachine: (ha-373000-m04) DBG | Found match: f2:8c:91:ee:dd:c0
	I0818 12:07:03.614515    3824 main.go:141] libmachine: (ha-373000-m04) DBG | IP: 192.169.0.8
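
The block above is the driver's IP discovery: it scans macOS's /var/db/dhcpd_leases for the VM's generated MAC and takes the matching lease's address. A compact sketch of just the matching step over already-parsed entries (the struct mirrors the "dhcp entry" lines in the log; the lease-file parsing itself is omitted):

	package main

	import "fmt"

	// dhcpLease mirrors the fields printed for each "dhcp entry" above.
	type dhcpLease struct {
		Name, IPAddress, HWAddress string
	}

	// findIP returns the IP of the first lease whose hardware address
	// equals the VM's MAC, as in "Found match: f2:8c:91:ee:dd:c0".
	func findIP(leases []dhcpLease, mac string) (string, bool) {
		for _, l := range leases {
			if l.HWAddress == mac {
				return l.IPAddress, true
			}
		}
		return "", false
	}

	func main() {
		leases := []dhcpLease{
			{Name: "minikube", IPAddress: "192.169.0.7", HWAddress: "72:9e:9b:7f:e6:a8"},
			{Name: "minikube", IPAddress: "192.169.0.8", HWAddress: "f2:8c:91:ee:dd:c0"},
		}
		fmt.Println(findIP(leases, "f2:8c:91:ee:dd:c0")) // 192.169.0.8 true
	}
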
	I0818 12:07:03.614531    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetConfigRaw
	I0818 12:07:03.615303    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetIP
	I0818 12:07:03.615492    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:07:03.615967    3824 machine.go:93] provisionDockerMachine start ...
	I0818 12:07:03.615979    3824 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	I0818 12:07:03.616121    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:03.616256    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:03.616397    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:03.616508    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:03.616609    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:03.616727    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:07:03.616882    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0818 12:07:03.616892    3824 main.go:141] libmachine: About to run SSH command:
	hostname
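
provisionDockerMachine's first probe is a bare `hostname` over SSH (its "minikube" answer appears further down, after the hyperkit boot noise). A self-contained sketch of that probe with golang.org/x/crypto/ssh, assuming key auth with the machine's generated id_rsa; the key path here is a placeholder:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/path/to/machines/ha-373000-m04/id_rsa") // placeholder path
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", "192.169.0.8:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.Output("hostname") // the same probe as above
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", out)
	}
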
	I0818 12:07:03.621176    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 12:07:03.629669    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 12:07:03.630674    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:07:03.630697    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:07:03.630709    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:07:03.630724    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:07:04.012965    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 12:07:04.012987    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 12:07:04.127720    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:07:04.127750    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:07:04.127760    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:07:04.127778    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:07:04.128559    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 12:07:04.128569    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 12:07:09.784251    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:09 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0818 12:07:09.784338    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:09 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0818 12:07:09.784350    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:09 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0818 12:07:09.808163    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:09 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0818 12:07:14.674465    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 12:07:14.674484    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetMachineName
	I0818 12:07:14.674657    3824 buildroot.go:166] provisioning hostname "ha-373000-m04"
	I0818 12:07:14.674669    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetMachineName
	I0818 12:07:14.674755    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:14.674835    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:14.674920    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:14.675008    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:14.675105    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:14.675237    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:07:14.675389    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0818 12:07:14.675398    3824 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-373000-m04 && echo "ha-373000-m04" | sudo tee /etc/hostname
	I0818 12:07:14.738016    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-373000-m04
	
	I0818 12:07:14.738030    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:14.738166    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:14.738262    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:14.738354    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:14.738444    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:14.738575    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:07:14.738730    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0818 12:07:14.738742    3824 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-373000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-373000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-373000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 12:07:14.800929    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: 
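
The embedded script above keeps 127.0.1.1 pointing at the new hostname: it rewrites an existing 127.0.1.1 line if one is present, and appends one otherwise. A sketch of rendering that command for an arbitrary node name (the quoting is copied from the logged script, not from minikube's source):

	package main

	import "fmt"

	// hostsCmd renders the /etc/hosts fix-up shown above for a node name.
	func hostsCmd(name string) string {
		return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
	  else
	    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
	  fi
	fi`, name)
	}

	func main() { fmt.Println(hostsCmd("ha-373000-m04")) }
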
	I0818 12:07:14.800946    3824 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1007/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1007/.minikube}
	I0818 12:07:14.800959    3824 buildroot.go:174] setting up certificates
	I0818 12:07:14.800965    3824 provision.go:84] configureAuth start
	I0818 12:07:14.800972    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetMachineName
	I0818 12:07:14.801115    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetIP
	I0818 12:07:14.801241    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:14.801327    3824 provision.go:143] copyHostCerts
	I0818 12:07:14.801357    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:07:14.801411    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem, removing ...
	I0818 12:07:14.801417    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:07:14.801581    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem (1082 bytes)
	I0818 12:07:14.801805    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:07:14.801837    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem, removing ...
	I0818 12:07:14.801842    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:07:14.801922    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem (1123 bytes)
	I0818 12:07:14.802072    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:07:14.802105    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem, removing ...
	I0818 12:07:14.802110    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:07:14.802180    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem (1675 bytes)
	I0818 12:07:14.802329    3824 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem org=jenkins.ha-373000-m04 san=[127.0.0.1 192.169.0.8 ha-373000-m04 localhost minikube]
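
The san=[...] list on the line above becomes the server certificate's subject alternative names. A sketch with crypto/x509 of how a cert could carry those SANs; it is self-signed here for brevity, whereas the provisioner signs with the cluster CA (ca.pem / ca-key.pem):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-373000-m04"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the logged san=[...] list.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.8")},
			DNSNames:    []string{"ha-373000-m04", "localhost", "minikube"},
		}
		// Self-signed for brevity; the real flow signs with the cluster CA.
		if _, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key); err != nil {
			panic(err)
		}
	}
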
	I0818 12:07:15.264268    3824 provision.go:177] copyRemoteCerts
	I0818 12:07:15.264318    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 12:07:15.264333    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:15.264514    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:15.264635    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:15.264736    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:15.264840    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/id_rsa Username:docker}
	I0818 12:07:15.297241    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 12:07:15.297314    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 12:07:15.317451    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 12:07:15.317516    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0818 12:07:15.337321    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 12:07:15.337400    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 12:07:15.357216    3824 provision.go:87] duration metric: took 556.258633ms to configureAuth
	I0818 12:07:15.357236    3824 buildroot.go:189] setting minikube options for container-runtime
	I0818 12:07:15.357403    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:07:15.357417    3824 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	I0818 12:07:15.357555    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:15.357641    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:15.357721    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:15.357806    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:15.357885    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:15.357993    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:07:15.358121    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0818 12:07:15.358132    3824 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0818 12:07:15.410788    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0818 12:07:15.410801    3824 buildroot.go:70] root file system type: tmpfs
	I0818 12:07:15.410873    3824 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0818 12:07:15.410885    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:15.411015    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:15.411098    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:15.411194    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:15.411280    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:15.411394    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:07:15.411541    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0818 12:07:15.411587    3824 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0818 12:07:15.476241    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	Environment=NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0818 12:07:15.476261    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:15.476401    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:15.476490    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:15.476597    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:15.476697    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:15.476838    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:07:15.476977    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0818 12:07:15.476990    3824 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0818 12:07:17.071913    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0818 12:07:17.071932    3824 machine.go:96] duration metric: took 13.456373306s to provisionDockerMachine
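
The `diff -u ... || { mv ...; systemctl ...; }` one-liner a few lines up is an idempotence guard: diff exits non-zero when the rendered unit differs from the installed one (or, as here, when the old file does not yet exist), and only then is the new unit swapped in and docker restarted. The same compare-then-swap on a local file, sketched in Go:

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// installIfChanged mimics the remote diff-or-replace idiom: write the
	// rendered unit only when it differs from (or is missing at) path,
	// and report whether a daemon-reload/restart is needed.
	func installIfChanged(path string, rendered []byte) (bool, error) {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, rendered) {
			return false, nil // unchanged: skip the restart entirely
		}
		if err := os.WriteFile(path, rendered, 0o644); err != nil {
			return false, err
		}
		return true, nil
	}

	func main() {
		changed, err := installIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
		if err != nil {
			panic(err)
		}
		fmt.Println("restart needed:", changed)
	}
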
	I0818 12:07:17.071939    3824 start.go:293] postStartSetup for "ha-373000-m04" (driver="hyperkit")
	I0818 12:07:17.071946    3824 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 12:07:17.071960    3824 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	I0818 12:07:17.072162    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 12:07:17.072176    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:17.072278    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:17.072367    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:17.072484    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:17.072586    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/id_rsa Username:docker}
	I0818 12:07:17.114832    3824 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 12:07:17.118934    3824 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 12:07:17.118950    3824 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/addons for local assets ...
	I0818 12:07:17.119044    3824 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/files for local assets ...
	I0818 12:07:17.119187    3824 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> 15262.pem in /etc/ssl/certs
	I0818 12:07:17.119194    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /etc/ssl/certs/15262.pem
	I0818 12:07:17.119347    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 12:07:17.131072    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:07:17.162572    3824 start.go:296] duration metric: took 90.627646ms for postStartSetup
	I0818 12:07:17.162595    3824 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	I0818 12:07:17.162766    3824 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0818 12:07:17.162780    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:17.162865    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:17.162946    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:17.163031    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:17.163111    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/id_rsa Username:docker}
	I0818 12:07:17.196597    3824 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0818 12:07:17.196659    3824 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0818 12:07:17.249652    3824 fix.go:56] duration metric: took 13.775528593s for fixHost
	I0818 12:07:17.249680    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:17.249818    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:17.249905    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:17.249992    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:17.250086    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:17.250222    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:07:17.250363    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0818 12:07:17.250370    3824 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 12:07:17.303909    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724008037.336410727
	
	I0818 12:07:17.303922    3824 fix.go:216] guest clock: 1724008037.336410727
	I0818 12:07:17.303927    3824 fix.go:229] Guest: 2024-08-18 12:07:17.336410727 -0700 PDT Remote: 2024-08-18 12:07:17.249669 -0700 PDT m=+165.308150896 (delta=86.741727ms)
	I0818 12:07:17.303937    3824 fix.go:200] guest clock delta is within tolerance: 86.741727ms
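
The clock check above runs `date +%s.%N` on the guest, parses the result, and compares it with the host clock; the 86.7ms delta is inside tolerance, so no resync is forced. A sketch of that comparison (float parsing is lossy at nanosecond scale, which is fine for a coarse tolerance test):

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	func main() {
		// Guest output of `date +%s.%N`, exactly as captured above.
		out := "1724008037.336410727"
		secs, err := strconv.ParseFloat(out, 64)
		if err != nil {
			panic(err)
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := time.Since(guest)
		// Accept the guest clock if the delta stays under a tolerance such as 1s.
		fmt.Println("delta:", delta, "within tolerance:", math.Abs(delta.Seconds()) < 1)
	}
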
	I0818 12:07:17.303941    3824 start.go:83] releasing machines lock for "ha-373000-m04", held for 13.829839932s
	I0818 12:07:17.303960    3824 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	I0818 12:07:17.304093    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetIP
	I0818 12:07:17.325783    3824 out.go:177] * Found network options:
	I0818 12:07:17.347322    3824 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	W0818 12:07:17.368151    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	W0818 12:07:17.368179    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	W0818 12:07:17.368192    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 12:07:17.368225    3824 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	I0818 12:07:17.368728    3824 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	I0818 12:07:17.368862    3824 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	I0818 12:07:17.368947    3824 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 12:07:17.368991    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	W0818 12:07:17.369043    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	W0818 12:07:17.369069    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	W0818 12:07:17.369086    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 12:07:17.369158    3824 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0818 12:07:17.369174    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:17.369197    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:17.369352    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:17.369370    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:17.369488    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:17.369507    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:17.369677    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/id_rsa Username:docker}
	I0818 12:07:17.369697    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:17.369814    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/id_rsa Username:docker}
	W0818 12:07:17.399808    3824 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 12:07:17.399874    3824 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 12:07:17.453508    3824 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 12:07:17.453527    3824 start.go:495] detecting cgroup driver to use...
	I0818 12:07:17.453602    3824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:07:17.468947    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0818 12:07:17.477909    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0818 12:07:17.486368    3824 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0818 12:07:17.486429    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0818 12:07:17.495070    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:07:17.503908    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0818 12:07:17.512255    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:07:17.520784    3824 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 12:07:17.529449    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0818 12:07:17.538408    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0818 12:07:17.546916    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0818 12:07:17.555361    3824 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 12:07:17.562930    3824 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 12:07:17.571624    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:07:17.670212    3824 ssh_runner.go:195] Run: sudo systemctl restart containerd
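
The run of sed commands above rewrites /etc/containerd/config.toml, most importantly forcing SystemdCgroup = false so containerd agrees with the "cgroupfs" driver decision, before the service is restarted. An equivalent in-place edit of just that key, sketched in Go:

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		path := "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// Same effect as the logged sed: force SystemdCgroup = false,
		// whatever indentation the key currently has.
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		if err := os.WriteFile(path, data, 0o644); err != nil {
			panic(err)
		}
	}
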
	I0818 12:07:17.690532    3824 start.go:495] detecting cgroup driver to use...
	I0818 12:07:17.690608    3824 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0818 12:07:17.710894    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:07:17.721349    3824 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 12:07:17.738837    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:07:17.750943    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:07:17.762092    3824 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0818 12:07:17.786808    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:07:17.798198    3824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:07:17.813512    3824 ssh_runner.go:195] Run: which cri-dockerd
	I0818 12:07:17.816407    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0818 12:07:17.824320    3824 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0818 12:07:17.838071    3824 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0818 12:07:17.938835    3824 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0818 12:07:18.032593    3824 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0818 12:07:18.032616    3824 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
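
The 130-byte /etc/docker/daemon.json pushed here carries the cgroup-driver choice for dockerd. The log records only the file's size, not its contents, so the payload below is a hypothetical minimum consistent with the "cgroupfs" line above (exec-opts with native.cgroupdriver is a real dockerd option):

	package main

	import "os"

	func main() {
		// Hypothetical payload; the actual 130-byte file is not printed
		// in the log.
		conf := []byte(`{"exec-opts": ["native.cgroupdriver=cgroupfs"]}` + "\n")
		if err := os.WriteFile("/etc/docker/daemon.json", conf, 0o644); err != nil {
			panic(err)
		}
	}
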
	I0818 12:07:18.046682    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:07:18.149082    3824 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 12:08:19.094745    3824 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.947540366s)
	I0818 12:08:19.094811    3824 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0818 12:08:19.130194    3824 out.go:201] 
	W0818 12:08:19.167950    3824 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 18 19:07:15 ha-373000-m04 systemd[1]: Starting Docker Application Container Engine...
	Aug 18 19:07:15 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:15.789565294Z" level=info msg="Starting up"
	Aug 18 19:07:15 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:15.790497979Z" level=info msg="containerd not running, starting managed containerd"
	Aug 18 19:07:15 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:15.791060023Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=491
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.808949895Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.823962995Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824017555Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824063133Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824074046Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824245628Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824285399Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824412941Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824458745Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824472526Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824481113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824628618Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824862154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.826539571Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.826578591Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.826700099Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.826735930Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.826894261Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.826943257Z" level=info msg="metadata content store policy set" policy=shared
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828221494Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828269425Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828283877Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828294494Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828306440Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828355173Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828863798Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828968570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829012385Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829087106Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829133358Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829171270Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829205360Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829239671Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829274394Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829307961Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829340520Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829370638Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829531056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829845805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829883191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829896300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829908724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829919786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829928151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829938442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829947500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829958637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829966701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829975548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830016884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830031620Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830069034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830080580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830090618Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830119633Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830130594Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830138753Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830147234Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830156530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830165223Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830172746Z" level=info msg="NRI interface is disabled by configuration."
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830327211Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830423458Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830503251Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830581618Z" level=info msg="containerd successfully booted in 0.022620s"
	Aug 18 19:07:16 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:16.817938076Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 18 19:07:16 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:16.831116800Z" level=info msg="Loading containers: start."
	Aug 18 19:07:16 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:16.929784593Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 18 19:07:16 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:16.991389466Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 18 19:07:17 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:17.063078080Z" level=info msg="Loading containers: done."
	Aug 18 19:07:17 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:17.074071701Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 18 19:07:17 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:17.074231517Z" level=info msg="Daemon has completed initialization"
	Aug 18 19:07:17 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:17.097399297Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 18 19:07:17 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:17.097566032Z" level=info msg="API listen on [::]:2376"
	Aug 18 19:07:17 ha-373000-m04 systemd[1]: Started Docker Application Container Engine.
	Aug 18 19:07:18 ha-373000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Aug 18 19:07:18 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:18.209129651Z" level=info msg="Processing signal 'terminated'"
	Aug 18 19:07:18 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:18.210124874Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 18 19:07:18 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:18.210325925Z" level=info msg="Daemon shutdown complete"
	Aug 18 19:07:18 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:18.210407877Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 18 19:07:18 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:18.210420112Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 18 19:07:19 ha-373000-m04 systemd[1]: docker.service: Deactivated successfully.
	Aug 18 19:07:19 ha-373000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Aug 18 19:07:19 ha-373000-m04 systemd[1]: Starting Docker Application Container Engine...
	Aug 18 19:07:19 ha-373000-m04 dockerd[1176]: time="2024-08-18T19:07:19.260443864Z" level=info msg="Starting up"
	Aug 18 19:08:19 ha-373000-m04 dockerd[1176]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 18 19:08:19 ha-373000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 18 19:08:19 ha-373000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 18 19:08:19 ha-373000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
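
The decisive line in the journal is dockerd's failed to dial "/run/containerd/containerd.sock": context deadline exceeded. After the restart at 19:07:19, the system containerd never came back up, dockerd could not reach its socket within the startup deadline, and systemd marked docker.service failed, which is what surfaces as the RUNTIME_ENABLE error. A quick reachability probe one could run on the guest to confirm that diagnosis:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Probe the socket dockerd failed to dial within its deadline.
		conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 2*time.Second)
		if err != nil {
			fmt.Println("containerd socket unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("containerd socket is accepting connections")
	}
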
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 18 19:07:15 ha-373000-m04 systemd[1]: Starting Docker Application Container Engine...
	Aug 18 19:07:15 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:15.789565294Z" level=info msg="Starting up"
	Aug 18 19:07:15 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:15.790497979Z" level=info msg="containerd not running, starting managed containerd"
	Aug 18 19:07:15 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:15.791060023Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=491
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.808949895Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.823962995Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824017555Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824063133Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824074046Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824245628Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824285399Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824412941Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824458745Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824472526Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824481113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824628618Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824862154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.826539571Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.826578591Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.826700099Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.826735930Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.826894261Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.826943257Z" level=info msg="metadata content store policy set" policy=shared
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828221494Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828269425Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828283877Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828294494Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828306440Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828355173Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828863798Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828968570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829012385Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829087106Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829133358Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829171270Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829205360Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829239671Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829274394Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829307961Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829340520Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829370638Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829531056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829845805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829883191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829896300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829908724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829919786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829928151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829938442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829947500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829958637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829966701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829975548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830016884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830031620Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830069034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830080580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830090618Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830119633Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830130594Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830138753Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830147234Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830156530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830165223Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830172746Z" level=info msg="NRI interface is disabled by configuration."
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830327211Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830423458Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830503251Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830581618Z" level=info msg="containerd successfully booted in 0.022620s"
	Aug 18 19:07:16 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:16.817938076Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 18 19:07:16 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:16.831116800Z" level=info msg="Loading containers: start."
	Aug 18 19:07:16 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:16.929784593Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 18 19:07:16 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:16.991389466Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 18 19:07:17 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:17.063078080Z" level=info msg="Loading containers: done."
	Aug 18 19:07:17 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:17.074071701Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 18 19:07:17 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:17.074231517Z" level=info msg="Daemon has completed initialization"
	Aug 18 19:07:17 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:17.097399297Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 18 19:07:17 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:17.097566032Z" level=info msg="API listen on [::]:2376"
	Aug 18 19:07:17 ha-373000-m04 systemd[1]: Started Docker Application Container Engine.
	Aug 18 19:07:18 ha-373000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Aug 18 19:07:18 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:18.209129651Z" level=info msg="Processing signal 'terminated'"
	Aug 18 19:07:18 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:18.210124874Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 18 19:07:18 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:18.210325925Z" level=info msg="Daemon shutdown complete"
	Aug 18 19:07:18 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:18.210407877Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 18 19:07:18 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:18.210420112Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 18 19:07:19 ha-373000-m04 systemd[1]: docker.service: Deactivated successfully.
	Aug 18 19:07:19 ha-373000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Aug 18 19:07:19 ha-373000-m04 systemd[1]: Starting Docker Application Container Engine...
	Aug 18 19:07:19 ha-373000-m04 dockerd[1176]: time="2024-08-18T19:07:19.260443864Z" level=info msg="Starting up"
	Aug 18 19:08:19 ha-373000-m04 dockerd[1176]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 18 19:08:19 ha-373000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 18 19:08:19 ha-373000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 18 19:08:19 ha-373000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0818 12:08:19.168043    3824 out.go:270] * 
	W0818 12:08:19.169228    3824 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:08:19.232626    3824 out.go:201] 

                                                
                                                
** /stderr **
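The decisive failure in the stderr capture above is the dockerd restart at 19:07:19 timing out while dialing containerd's socket: `failed to dial "/run/containerd/containerd.sock": context deadline exceeded`. As an illustration of that error's shape only, here is a minimal Go sketch using the standard library; it is not dockerd's actual startup code, and the 60-second deadline is inferred from the 19:07:19 -> 19:08:19 gap in the log:

	package main
	
	import (
		"context"
		"fmt"
		"net"
		"time"
	)
	
	// dialWithDeadline retries a unix-socket dial until it succeeds or the
	// context deadline expires, roughly what a daemon does while waiting
	// for a dependency's socket to appear.
	func dialWithDeadline(ctx context.Context, path string) (net.Conn, error) {
		var d net.Dialer
		for {
			conn, err := d.DialContext(ctx, "unix", path)
			if err == nil {
				return conn, nil
			}
			select {
			case <-ctx.Done():
				// Wraps context.DeadlineExceeded, matching the log's
				// `failed to dial "...": context deadline exceeded` shape.
				return nil, fmt.Errorf("failed to dial %q: %w", path, ctx.Err())
			case <-time.After(500 * time.Millisecond):
				// socket not up yet; retry until the deadline
			}
		}
	}
	
	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
		defer cancel()
	
		conn, err := dialWithDeadline(ctx, "/run/containerd/containerd.sock")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer conn.Close()
		fmt.Println("containerd socket is accepting connections")
	}

If containerd never starts listening, the dial loop exits only when the deadline expires, which is consistent with dockerd sitting silent for exactly one minute before systemd records the unit failure.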
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p ha-373000 -v=7 --alsologtostderr" : exit status 90
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-373000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-373000 -n ha-373000
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-373000 logs -n 25: (3.345723846s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-373000 cp ha-373000-m03:/home/docker/cp-test.txt                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m02:/home/docker/cp-test_ha-373000-m03_ha-373000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n ha-373000-m02 sudo cat                                                                                      | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /home/docker/cp-test_ha-373000-m03_ha-373000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-373000 cp ha-373000-m03:/home/docker/cp-test.txt                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04:/home/docker/cp-test_ha-373000-m03_ha-373000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n ha-373000-m04 sudo cat                                                                                      | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /home/docker/cp-test_ha-373000-m03_ha-373000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-373000 cp testdata/cp-test.txt                                                                                            | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-373000 cp ha-373000-m04:/home/docker/cp-test.txt                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1633039751/001/cp-test_ha-373000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-373000 cp ha-373000-m04:/home/docker/cp-test.txt                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000:/home/docker/cp-test_ha-373000-m04_ha-373000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n ha-373000 sudo cat                                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /home/docker/cp-test_ha-373000-m04_ha-373000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-373000 cp ha-373000-m04:/home/docker/cp-test.txt                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m02:/home/docker/cp-test_ha-373000-m04_ha-373000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n ha-373000-m02 sudo cat                                                                                      | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /home/docker/cp-test_ha-373000-m04_ha-373000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-373000 cp ha-373000-m04:/home/docker/cp-test.txt                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m03:/home/docker/cp-test_ha-373000-m04_ha-373000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n ha-373000-m03 sudo cat                                                                                      | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /home/docker/cp-test_ha-373000-m04_ha-373000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-373000 node stop m02 -v=7                                                                                                 | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-373000 node start m02 -v=7                                                                                                | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:04 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-373000 -v=7                                                                                                       | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:04 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-373000 -v=7                                                                                                            | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:04 PDT | 18 Aug 24 12:04 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-373000 --wait=true -v=7                                                                                                | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:04 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-373000                                                                                                            | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:08 PDT |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 12:04:31
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 12:04:31.983272    3824 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:04:31.983454    3824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:04:31.983459    3824 out.go:358] Setting ErrFile to fd 2...
	I0818 12:04:31.983463    3824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:04:31.983623    3824 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1007/.minikube/bin
	I0818 12:04:31.985167    3824 out.go:352] Setting JSON to false
	I0818 12:04:32.009018    3824 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2042,"bootTime":1724005829,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0818 12:04:32.009111    3824 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:04:32.030819    3824 out.go:177] * [ha-373000] minikube v1.33.1 on Darwin 14.6.1
	I0818 12:04:32.074529    3824 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:04:32.074586    3824 notify.go:220] Checking for updates...
	I0818 12:04:32.118375    3824 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:04:32.139430    3824 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0818 12:04:32.160729    3824 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:04:32.182618    3824 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	I0818 12:04:32.204484    3824 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:04:32.226364    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:04:32.226552    3824 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:04:32.227242    3824 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:04:32.227322    3824 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:04:32.236867    3824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51772
	I0818 12:04:32.237225    3824 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:04:32.237659    3824 main.go:141] libmachine: Using API Version  1
	I0818 12:04:32.237676    3824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:04:32.237931    3824 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:04:32.238060    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:04:32.267813    3824 out.go:177] * Using the hyperkit driver based on existing profile
	I0818 12:04:32.289474    3824 start.go:297] selected driver: hyperkit
	I0818 12:04:32.289504    3824 start.go:901] validating driver "hyperkit" against &{Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:04:32.289713    3824 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:04:32.289908    3824 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:04:32.290109    3824 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19423-1007/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0818 12:04:32.300191    3824 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0818 12:04:32.305600    3824 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:04:32.305625    3824 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0818 12:04:32.309104    3824 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:04:32.309145    3824 cni.go:84] Creating CNI manager for ""
	I0818 12:04:32.309152    3824 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0818 12:04:32.309217    3824 start.go:340] cluster config:
	{Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:04:32.309317    3824 iso.go:125] acquiring lock: {Name:mkfcc70059a4397f76252ba67d02448d3569c468 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:04:32.358744    3824 out.go:177] * Starting "ha-373000" primary control-plane node in "ha-373000" cluster
	I0818 12:04:32.379125    3824 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:04:32.379197    3824 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0818 12:04:32.379221    3824 cache.go:56] Caching tarball of preloaded images
	I0818 12:04:32.379454    3824 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0818 12:04:32.379473    3824 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:04:32.379655    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:04:32.380668    3824 start.go:360] acquireMachinesLock for ha-373000: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:04:32.380793    3824 start.go:364] duration metric: took 98.513µs to acquireMachinesLock for "ha-373000"
	I0818 12:04:32.380830    3824 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:04:32.380850    3824 fix.go:54] fixHost starting: 
	I0818 12:04:32.381275    3824 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:04:32.381305    3824 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:04:32.390300    3824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51774
	I0818 12:04:32.390644    3824 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:04:32.390984    3824 main.go:141] libmachine: Using API Version  1
	I0818 12:04:32.390995    3824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:04:32.391207    3824 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:04:32.391330    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:04:32.391423    3824 main.go:141] libmachine: (ha-373000) Calling .GetState
	I0818 12:04:32.391500    3824 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:04:32.391596    3824 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid from json: 2975
	I0818 12:04:32.392493    3824 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid 2975 missing from process table
	I0818 12:04:32.392518    3824 fix.go:112] recreateIfNeeded on ha-373000: state=Stopped err=<nil>
	I0818 12:04:32.392535    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	W0818 12:04:32.392619    3824 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:04:32.435089    3824 out.go:177] * Restarting existing hyperkit VM for "ha-373000" ...
	I0818 12:04:32.455966    3824 main.go:141] libmachine: (ha-373000) Calling .Start
	I0818 12:04:32.456397    3824 main.go:141] libmachine: (ha-373000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid
	I0818 12:04:32.456421    3824 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:04:32.458400    3824 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid 2975 missing from process table
	I0818 12:04:32.458413    3824 main.go:141] libmachine: (ha-373000) DBG | pid 2975 is in state "Stopped"
	I0818 12:04:32.458431    3824 main.go:141] libmachine: (ha-373000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid...
	I0818 12:04:32.458650    3824 main.go:141] libmachine: (ha-373000) DBG | Using UUID 2f6e5f86-d003-4f9b-8f55-d5f48a14c3df
	I0818 12:04:32.582503    3824 main.go:141] libmachine: (ha-373000) DBG | Generated MAC be:21:66:25:9a:b1
	I0818 12:04:32.582527    3824 main.go:141] libmachine: (ha-373000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000
	I0818 12:04:32.582675    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2f6e5f86-d003-4f9b-8f55-d5f48a14c3df", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037d230)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:04:32.582701    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2f6e5f86-d003-4f9b-8f55-d5f48a14c3df", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037d230)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:04:32.582750    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2f6e5f86-d003-4f9b-8f55-d5f48a14c3df", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/ha-373000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"}
	I0818 12:04:32.582797    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2f6e5f86-d003-4f9b-8f55-d5f48a14c3df -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/ha-373000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"
	I0818 12:04:32.582809    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 12:04:32.584342    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 DEBUG: hyperkit: Pid is 3836
	I0818 12:04:32.584802    3824 main.go:141] libmachine: (ha-373000) DBG | Attempt 0
	I0818 12:04:32.584828    3824 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:04:32.584904    3824 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid from json: 3836
	I0818 12:04:32.586608    3824 main.go:141] libmachine: (ha-373000) DBG | Searching for be:21:66:25:9a:b1 in /var/db/dhcpd_leases ...
	I0818 12:04:32.586694    3824 main.go:141] libmachine: (ha-373000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0818 12:04:32.586716    3824 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c245a6}
	I0818 12:04:32.586736    3824 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39707}
	I0818 12:04:32.586754    3824 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c39672}
	I0818 12:04:32.586763    3824 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c395f4}
	I0818 12:04:32.586768    3824 main.go:141] libmachine: (ha-373000) DBG | Found match: be:21:66:25:9a:b1
	I0818 12:04:32.586791    3824 main.go:141] libmachine: (ha-373000) DBG | IP: 192.169.0.5
	I0818 12:04:32.586800    3824 main.go:141] libmachine: (ha-373000) Calling .GetConfigRaw
	I0818 12:04:32.587439    3824 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:04:32.587606    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:04:32.588031    3824 machine.go:93] provisionDockerMachine start ...
	I0818 12:04:32.588043    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:04:32.588201    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:32.588339    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:32.588463    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:32.588602    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:32.588712    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:32.588878    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:04:32.589128    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:04:32.589140    3824 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 12:04:32.592359    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 12:04:32.649659    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 12:04:32.650386    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:04:32.650405    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:04:32.650422    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:04:32.650441    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:04:33.028577    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 12:04:33.028592    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 12:04:33.143700    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:04:33.143730    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:04:33.143746    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:04:33.143773    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:04:33.144665    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 12:04:33.144677    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 12:04:38.692844    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:38 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0818 12:04:38.692980    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:38 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0818 12:04:38.692989    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:38 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0818 12:04:38.717966    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:38 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0818 12:04:43.657661    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 12:04:43.657675    3824 main.go:141] libmachine: (ha-373000) Calling .GetMachineName
	I0818 12:04:43.657817    3824 buildroot.go:166] provisioning hostname "ha-373000"
	I0818 12:04:43.657829    3824 main.go:141] libmachine: (ha-373000) Calling .GetMachineName
	I0818 12:04:43.657947    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:43.658033    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:43.658131    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:43.658218    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:43.658320    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:43.658446    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:04:43.658583    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:04:43.658592    3824 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-373000 && echo "ha-373000" | sudo tee /etc/hostname
	I0818 12:04:43.726337    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-373000
	
	I0818 12:04:43.726356    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:43.726492    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:43.726602    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:43.726701    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:43.726793    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:43.726914    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:04:43.727062    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:04:43.727073    3824 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-373000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-373000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-373000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 12:04:43.791204    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 12:04:43.791222    3824 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1007/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1007/.minikube}
	I0818 12:04:43.791240    3824 buildroot.go:174] setting up certificates
	I0818 12:04:43.791251    3824 provision.go:84] configureAuth start
	I0818 12:04:43.791258    3824 main.go:141] libmachine: (ha-373000) Calling .GetMachineName
	I0818 12:04:43.791389    3824 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:04:43.791486    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:43.791580    3824 provision.go:143] copyHostCerts
	I0818 12:04:43.791612    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:04:43.791682    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem, removing ...
	I0818 12:04:43.791691    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:04:43.791831    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem (1082 bytes)
	I0818 12:04:43.792037    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:04:43.792077    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem, removing ...
	I0818 12:04:43.792082    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:04:43.792161    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem (1123 bytes)
	I0818 12:04:43.792314    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:04:43.792360    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem, removing ...
	I0818 12:04:43.792365    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:04:43.792438    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem (1675 bytes)
	I0818 12:04:43.792585    3824 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem org=jenkins.ha-373000 san=[127.0.0.1 192.169.0.5 ha-373000 localhost minikube]
	I0818 12:04:43.849995    3824 provision.go:177] copyRemoteCerts
	I0818 12:04:43.850046    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 12:04:43.850064    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:43.850180    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:43.850277    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:43.850383    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:43.850475    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:04:43.887087    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 12:04:43.887163    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 12:04:43.906588    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 12:04:43.906643    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0818 12:04:43.926387    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 12:04:43.926447    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 12:04:43.945959    3824 provision.go:87] duration metric: took 154.69571ms to configureAuth
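configureAuth regenerates the Docker server keypair signed by the minikube CA, with the SAN set logged at 12:04:43.792585 (127.0.0.1, 192.169.0.5, ha-373000, localhost, minikube), then pushes ca.pem, server.pem and server-key.pem into /etc/docker over SSH. minikube does the signing in Go; a rough openssl equivalent of the same certificate shape, file names hypothetical (uses bash process substitution):

    # issue a server cert against the CA with the SANs from the log (sketch)
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.ha-373000" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 1095 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.169.0.5,DNS:ha-373000,DNS:localhost,DNS:minikube')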
	I0818 12:04:43.945972    3824 buildroot.go:189] setting minikube options for container-runtime
	I0818 12:04:43.946140    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:04:43.946153    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:04:43.946287    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:43.946379    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:43.946466    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:43.946557    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:43.946656    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:43.946772    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:04:43.946901    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:04:43.946910    3824 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0818 12:04:44.005207    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0818 12:04:44.005222    3824 buildroot.go:70] root file system type: tmpfs
	I0818 12:04:44.005300    3824 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0818 12:04:44.005312    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:44.005446    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:44.005534    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:44.005629    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:44.005730    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:44.005877    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:04:44.006020    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:04:44.006065    3824 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0818 12:04:44.073819    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0818 12:04:44.073841    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:44.073984    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:44.074098    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:44.074187    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:44.074268    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:44.074392    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:04:44.074539    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:04:44.074553    3824 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0818 12:04:45.741799    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0818 12:04:45.741813    3824 machine.go:96] duration metric: took 13.154182627s to provisionDockerMachine
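The install step at 12:04:44.074553 uses a write-then-swap idiom: the rendered unit is written to docker.service.new, and only if diff reports a difference (or, as here, the live unit does not exist yet) is the new file moved into place and Docker reloaded, enabled and restarted, so an unchanged unit costs no daemon restart. The idiom in isolation, with render_unit standing in for whatever produces the file (hypothetical):

    render_unit > /tmp/docker.service.new    # hypothetical generator
    if ! sudo diff -u /lib/systemd/system/docker.service /tmp/docker.service.new; then
        sudo mv /tmp/docker.service.new /lib/systemd/system/docker.service
        sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker
    fi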
	I0818 12:04:45.741824    3824 start.go:293] postStartSetup for "ha-373000" (driver="hyperkit")
	I0818 12:04:45.741833    3824 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 12:04:45.741844    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:04:45.742025    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 12:04:45.742046    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:45.742143    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:45.742239    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:45.742328    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:45.742403    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:04:45.779742    3824 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 12:04:45.785976    3824 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 12:04:45.785994    3824 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/addons for local assets ...
	I0818 12:04:45.786100    3824 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/files for local assets ...
	I0818 12:04:45.786286    3824 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> 15262.pem in /etc/ssl/certs
	I0818 12:04:45.786293    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /etc/ssl/certs/15262.pem
	I0818 12:04:45.786507    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 12:04:45.795153    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:04:45.825008    3824 start.go:296] duration metric: took 83.165524ms for postStartSetup
	I0818 12:04:45.825032    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:04:45.825216    3824 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0818 12:04:45.825229    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:45.825330    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:45.825446    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:45.825536    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:45.825609    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:04:45.861497    3824 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0818 12:04:45.861553    3824 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0818 12:04:45.913975    3824 fix.go:56] duration metric: took 13.533549329s for fixHost
	I0818 12:04:45.914000    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:45.914142    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:45.914243    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:45.914335    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:45.914429    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:45.914562    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:04:45.914716    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:04:45.914724    3824 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 12:04:45.972708    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724007885.983977698
	
	I0818 12:04:45.972721    3824 fix.go:216] guest clock: 1724007885.983977698
	I0818 12:04:45.972726    3824 fix.go:229] Guest: 2024-08-18 12:04:45.983977698 -0700 PDT Remote: 2024-08-18 12:04:45.913989 -0700 PDT m=+13.967759099 (delta=69.988698ms)
	I0818 12:04:45.972744    3824 fix.go:200] guest clock delta is within tolerance: 69.988698ms
	I0818 12:04:45.972748    3824 start.go:83] releasing machines lock for "ha-373000", held for 13.592366774s
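The fixHost step compares date +%s.%N on the guest against the host clock and only forces a resync when the delta leaves tolerance; here the 69.988698ms skew is accepted. Repeating the measurement by hand, assuming a GNU date on the host (e.g. gdate from coreutils, since BSD/macOS date does not support %N):

    guest=$(ssh docker@192.169.0.5 date +%s.%N)
    host=$(gdate +%s.%N)
    echo "guest-host skew: $(echo "$guest - $host" | bc) s"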
	I0818 12:04:45.972769    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:04:45.972898    3824 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:04:45.973002    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:04:45.973353    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:04:45.973448    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:04:45.973532    3824 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 12:04:45.973568    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:45.973602    3824 ssh_runner.go:195] Run: cat /version.json
	I0818 12:04:45.973622    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:45.973654    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:45.973709    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:45.973731    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:45.973791    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:45.973819    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:45.973885    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:04:45.973899    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:45.973975    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:04:46.010017    3824 ssh_runner.go:195] Run: systemctl --version
	I0818 12:04:46.068668    3824 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 12:04:46.073848    3824 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 12:04:46.073896    3824 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 12:04:46.088665    3824 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
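Rather than deleting competing CNI configs, the find above renames any bridge/podman files in /etc/cni/net.d with a .mk_disabled suffix; here that caught 87-podman-bridge.conflist. The rename is reversible by hand:

    # re-enable the config sidelined in the log above (sketch)
    sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled \
            /etc/cni/net.d/87-podman-bridge.conflist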
	I0818 12:04:46.088678    3824 start.go:495] detecting cgroup driver to use...
	I0818 12:04:46.088793    3824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:04:46.104594    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0818 12:04:46.113505    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0818 12:04:46.122459    3824 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0818 12:04:46.122502    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0818 12:04:46.131401    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:04:46.140195    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0818 12:04:46.148984    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:04:46.157732    3824 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 12:04:46.166637    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0818 12:04:46.175587    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0818 12:04:46.184399    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0818 12:04:46.193294    3824 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 12:04:46.201351    3824 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 12:04:46.209432    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:04:46.307330    3824 ssh_runner.go:195] Run: sudo systemctl restart containerd
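The run of sed edits above rewrites /etc/containerd/config.toml for the cgroupfs driver (SystemdCgroup = false), pins the sandbox image to registry.k8s.io/pause:3.10, points conf_dir at /etc/cni/net.d and re-enables unprivileged ports, before the daemon-reload/restart pair applies it. A quick post-edit inspection on the guest (sketch):

    grep -E 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml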
	I0818 12:04:46.326804    3824 start.go:495] detecting cgroup driver to use...
	I0818 12:04:46.326886    3824 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0818 12:04:46.339615    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:04:46.350592    3824 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 12:04:46.370916    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:04:46.381030    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:04:46.391260    3824 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0818 12:04:46.416547    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:04:46.426851    3824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:04:46.442033    3824 ssh_runner.go:195] Run: which cri-dockerd
	I0818 12:04:46.444975    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0818 12:04:46.453011    3824 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0818 12:04:46.466482    3824 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0818 12:04:46.579328    3824 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0818 12:04:46.679794    3824 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0818 12:04:46.679875    3824 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0818 12:04:46.693907    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:04:46.791012    3824 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 12:04:49.093057    3824 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.302096527s)
	I0818 12:04:49.093136    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0818 12:04:49.103320    3824 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0818 12:04:49.115838    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:04:49.126241    3824 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0818 12:04:49.218487    3824 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0818 12:04:49.318047    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:04:49.424425    3824 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0818 12:04:49.438128    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:04:49.449061    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:04:49.547962    3824 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0818 12:04:49.611460    3824 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0818 12:04:49.611544    3824 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0818 12:04:49.616359    3824 start.go:563] Will wait 60s for crictl version
	I0818 12:04:49.616414    3824 ssh_runner.go:195] Run: which crictl
	I0818 12:04:49.620236    3824 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 12:04:49.646389    3824 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0818 12:04:49.646459    3824 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:04:49.664790    3824 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:04:49.705551    3824 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0818 12:04:49.705601    3824 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:04:49.706071    3824 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0818 12:04:49.710649    3824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
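The one-liner above is minikube's idempotent /etc/hosts rewrite: strip any stale host.minikube.internal line, append the current mapping, and cp the temp file over the original (cp rather than mv keeps the existing inode, which matters when /etc/hosts is a bind mount). The same pattern, isolated:

    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      echo $'192.169.0.1\thost.minikube.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$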
	I0818 12:04:49.720358    3824 kubeadm.go:883] updating cluster {Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 12:04:49.720454    3824 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:04:49.720509    3824 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0818 12:04:49.733920    3824 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0818 12:04:49.733938    3824 docker.go:615] Images already preloaded, skipping extraction
	I0818 12:04:49.734009    3824 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0818 12:04:49.747065    3824 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0818 12:04:49.747084    3824 cache_images.go:84] Images are preloaded, skipping loading
	I0818 12:04:49.747099    3824 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0818 12:04:49.747179    3824 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-373000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 12:04:49.747253    3824 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0818 12:04:49.785583    3824 cni.go:84] Creating CNI manager for ""
	I0818 12:04:49.785600    3824 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0818 12:04:49.785611    3824 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 12:04:49.785627    3824 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-373000 NodeName:ha-373000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 12:04:49.785710    3824 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-373000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
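The rendered config pins cgroupDriver: cgroupfs to match the container-runtime settings above, routes the kubelet through cri-dockerd, and disables disk-pressure eviction for the small test VM; it is staged to /var/tmp/minikube/kubeadm.yaml.new a few lines below. One way to sanity-check a file of this shape, assuming the versioned kubeadm binary the log later finds under /var/lib/minikube/binaries:

    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new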
	
	I0818 12:04:49.785725    3824 kube-vip.go:115] generating kube-vip config ...
	I0818 12:04:49.785779    3824 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0818 12:04:49.798283    3824 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
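kube-vip's control-plane load balancing rides on IPVS, which is why the modprobe above loads ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh and nf_conntrack before lb_enable is written into the config. Confirming the modules landed (sketch):

    lsmod | grep -E '^(ip_vs|nf_conntrack)'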
	I0818 12:04:49.798356    3824 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
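The manifest runs kube-vip as a static pod that ARPs the virtual IP 192.169.0.254 on eth0 and elects a leader through the plndr-cp-lock lease, so exactly one control-plane node answers for the VIP at a time. Once the cluster is reachable, both sides are observable (kubectl context assumed):

    kubectl -n kube-system get lease plndr-cp-lock
    ping -c 2 192.169.0.254    # the VIP should answer from the current leader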
	I0818 12:04:49.798405    3824 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 12:04:49.807035    3824 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 12:04:49.807081    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0818 12:04:49.814327    3824 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0818 12:04:49.827868    3824 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 12:04:49.841383    3824 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0818 12:04:49.855255    3824 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0818 12:04:49.868811    3824 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0818 12:04:49.871686    3824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 12:04:49.880822    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:04:49.979755    3824 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 12:04:49.993936    3824 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000 for IP: 192.169.0.5
	I0818 12:04:49.993948    3824 certs.go:194] generating shared ca certs ...
	I0818 12:04:49.993960    3824 certs.go:226] acquiring lock for ca certs: {Name:mkf9c4ce6e92bb713042213e4605fc93500c1ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:04:49.994155    3824 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key
	I0818 12:04:49.994224    3824 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key
	I0818 12:04:49.994234    3824 certs.go:256] generating profile certs ...
	I0818 12:04:49.994338    3824 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.key
	I0818 12:04:49.994359    3824 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.01b2710d
	I0818 12:04:49.994377    3824 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.01b2710d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0818 12:04:50.091613    3824 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.01b2710d ...
	I0818 12:04:50.091630    3824 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.01b2710d: {Name:mkea55c8a03a32b3ce24aa90dfb71f1f97bc2354 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:04:50.092214    3824 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.01b2710d ...
	I0818 12:04:50.092225    3824 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.01b2710d: {Name:mkcfe2a6c64cb35ce66e627cea270e19236eac55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:04:50.092457    3824 certs.go:381] copying /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.01b2710d -> /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt
	I0818 12:04:50.092702    3824 certs.go:385] copying /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.01b2710d -> /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key
	I0818 12:04:50.092980    3824 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key
	I0818 12:04:50.092991    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0818 12:04:50.093016    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0818 12:04:50.093037    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0818 12:04:50.093056    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0818 12:04:50.093084    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0818 12:04:50.093110    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0818 12:04:50.093130    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0818 12:04:50.093151    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0818 12:04:50.093255    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem (1338 bytes)
	W0818 12:04:50.093309    3824 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526_empty.pem, impossibly tiny 0 bytes
	I0818 12:04:50.093320    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem (1675 bytes)
	I0818 12:04:50.093368    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem (1082 bytes)
	I0818 12:04:50.093405    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem (1123 bytes)
	I0818 12:04:50.093439    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem (1675 bytes)
	I0818 12:04:50.093508    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:04:50.093540    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /usr/share/ca-certificates/15262.pem
	I0818 12:04:50.093561    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:04:50.093579    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem -> /usr/share/ca-certificates/1526.pem
	I0818 12:04:50.094042    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 12:04:50.115280    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0818 12:04:50.139151    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 12:04:50.164514    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0818 12:04:50.185623    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0818 12:04:50.205278    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 12:04:50.227215    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 12:04:50.252699    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 12:04:50.287877    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /usr/share/ca-certificates/15262.pem (1708 bytes)
	I0818 12:04:50.314703    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 12:04:50.362716    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem --> /usr/share/ca-certificates/1526.pem (1338 bytes)
	I0818 12:04:50.396868    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 12:04:50.413037    3824 ssh_runner.go:195] Run: openssl version
	I0818 12:04:50.417460    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15262.pem && ln -fs /usr/share/ca-certificates/15262.pem /etc/ssl/certs/15262.pem"
	I0818 12:04:50.427101    3824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15262.pem
	I0818 12:04:50.430627    3824 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:54 /usr/share/ca-certificates/15262.pem
	I0818 12:04:50.430663    3824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15262.pem
	I0818 12:04:50.436239    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15262.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 12:04:50.445438    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 12:04:50.454433    3824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:04:50.458262    3824 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:38 /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:04:50.458306    3824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:04:50.462517    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 12:04:50.471554    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1526.pem && ln -fs /usr/share/ca-certificates/1526.pem /etc/ssl/certs/1526.pem"
	I0818 12:04:50.480511    3824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1526.pem
	I0818 12:04:50.483892    3824 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:54 /usr/share/ca-certificates/1526.pem
	I0818 12:04:50.483930    3824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1526.pem
	I0818 12:04:50.488142    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1526.pem /etc/ssl/certs/51391683.0"
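The three ln -fs targets above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash lookup convention: the link name is the certificate's subject hash, as printed by the interleaved openssl x509 -hash runs, plus a .0 collision suffix, which is what lets TLS clients find a CA by directory scan. The general pattern:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"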
	I0818 12:04:50.497129    3824 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 12:04:50.500599    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 12:04:50.505066    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 12:04:50.509424    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 12:04:50.513887    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 12:04:50.518263    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 12:04:50.522558    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
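Each openssl x509 ... -checkend 86400 above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how the restart path decides whether control-plane certs must be regenerated before reuse. Standalone:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"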
	I0818 12:04:50.526858    3824 kubeadm.go:392] StartCluster: {Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:04:50.526981    3824 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0818 12:04:50.544620    3824 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 12:04:50.553037    3824 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 12:04:50.553052    3824 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 12:04:50.553092    3824 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 12:04:50.561771    3824 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 12:04:50.562091    3824 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-373000" does not appear in /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:04:50.562172    3824 kubeconfig.go:62] /Users/jenkins/minikube-integration/19423-1007/kubeconfig needs updating (will repair): [kubeconfig missing "ha-373000" cluster setting kubeconfig missing "ha-373000" context setting]
	I0818 12:04:50.562375    3824 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/kubeconfig: {Name:mk884388b2510ace5b23e8fdd3343cda9524a1f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:04:50.562752    3824 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:04:50.562947    3824 kapi.go:59] client config for ha-373000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xcb43f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0818 12:04:50.563273    3824 cert_rotation.go:140] Starting client certificate rotation controller
	I0818 12:04:50.563454    3824 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 12:04:50.571351    3824 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0818 12:04:50.571368    3824 kubeadm.go:597] duration metric: took 18.311426ms to restartPrimaryControlPlane
	I0818 12:04:50.571374    3824 kubeadm.go:394] duration metric: took 44.525606ms to StartCluster
	I0818 12:04:50.571381    3824 settings.go:142] acquiring lock: {Name:mk54874f9a6ab5b428c8697c9ef71cbcdde2d89e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:04:50.571461    3824 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:04:50.571852    3824 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/kubeconfig: {Name:mk884388b2510ace5b23e8fdd3343cda9524a1f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:04:50.572070    3824 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:04:50.572083    3824 start.go:241] waiting for startup goroutines ...
	I0818 12:04:50.572098    3824 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 12:04:50.572212    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:04:50.614034    3824 out.go:177] * Enabled addons: 
	I0818 12:04:50.635950    3824 addons.go:510] duration metric: took 63.86135ms for enable addons: enabled=[]
	I0818 12:04:50.635988    3824 start.go:246] waiting for cluster config update ...
	I0818 12:04:50.636000    3824 start.go:255] writing updated cluster config ...
	I0818 12:04:50.657675    3824 out.go:201] 
	I0818 12:04:50.679473    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:04:50.679623    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:04:50.701920    3824 out.go:177] * Starting "ha-373000-m02" control-plane node in "ha-373000" cluster
	I0818 12:04:50.743977    3824 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:04:50.744059    3824 cache.go:56] Caching tarball of preloaded images
	I0818 12:04:50.744255    3824 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0818 12:04:50.744273    3824 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:04:50.744402    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:04:50.745331    3824 start.go:360] acquireMachinesLock for ha-373000-m02: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:04:50.745437    3824 start.go:364] duration metric: took 80.166µs to acquireMachinesLock for "ha-373000-m02"
	I0818 12:04:50.745464    3824 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:04:50.745472    3824 fix.go:54] fixHost starting: m02
	I0818 12:04:50.745909    3824 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:04:50.745945    3824 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:04:50.754990    3824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51796
	I0818 12:04:50.755371    3824 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:04:50.755727    3824 main.go:141] libmachine: Using API Version  1
	I0818 12:04:50.755746    3824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:04:50.755953    3824 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:04:50.756082    3824 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:04:50.756178    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetState
	I0818 12:04:50.756271    3824 main.go:141] libmachine: (ha-373000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:04:50.756346    3824 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid from json: 3777
	I0818 12:04:50.757254    3824 fix.go:112] recreateIfNeeded on ha-373000-m02: state=Stopped err=<nil>
	I0818 12:04:50.757265    3824 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:04:50.757267    3824 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid 3777 missing from process table
	W0818 12:04:50.757351    3824 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:04:50.798825    3824 out.go:177] * Restarting existing hyperkit VM for "ha-373000-m02" ...
	I0818 12:04:50.819905    3824 main.go:141] libmachine: (ha-373000-m02) Calling .Start
	I0818 12:04:50.820210    3824 main.go:141] libmachine: (ha-373000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:04:50.820266    3824 main.go:141] libmachine: (ha-373000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid
	I0818 12:04:50.822018    3824 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid 3777 missing from process table
	I0818 12:04:50.822032    3824 main.go:141] libmachine: (ha-373000-m02) DBG | pid 3777 is in state "Stopped"
	I0818 12:04:50.822050    3824 main.go:141] libmachine: (ha-373000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid...
	I0818 12:04:50.822421    3824 main.go:141] libmachine: (ha-373000-m02) DBG | Using UUID 7a237572-4e62-4b98-a476-83254bfde967
	I0818 12:04:50.852069    3824 main.go:141] libmachine: (ha-373000-m02) DBG | Generated MAC ca:b5:c4:e6:47:79
	I0818 12:04:50.852091    3824 main.go:141] libmachine: (ha-373000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000
	I0818 12:04:50.852254    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7a237572-4e62-4b98-a476-83254bfde967", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b05a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:04:50.852282    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7a237572-4e62-4b98-a476-83254bfde967", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b05a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:04:50.852317    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "7a237572-4e62-4b98-a476-83254bfde967", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/ha-373000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"}
	I0818 12:04:50.852367    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 7a237572-4e62-4b98-a476-83254bfde967 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/ha-373000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"
	I0818 12:04:50.852388    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 12:04:50.854019    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 DEBUG: hyperkit: Pid is 3847
	I0818 12:04:50.854499    3824 main.go:141] libmachine: (ha-373000-m02) DBG | Attempt 0
	I0818 12:04:50.854512    3824 main.go:141] libmachine: (ha-373000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:04:50.854595    3824 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid from json: 3847
	I0818 12:04:50.856201    3824 main.go:141] libmachine: (ha-373000-m02) DBG | Searching for ca:b5:c4:e6:47:79 in /var/db/dhcpd_leases ...
	I0818 12:04:50.856261    3824 main.go:141] libmachine: (ha-373000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0818 12:04:50.856275    3824 main.go:141] libmachine: (ha-373000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39749}
	I0818 12:04:50.856297    3824 main.go:141] libmachine: (ha-373000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c245a6}
	I0818 12:04:50.856304    3824 main.go:141] libmachine: (ha-373000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39707}
	I0818 12:04:50.856311    3824 main.go:141] libmachine: (ha-373000-m02) DBG | Found match: ca:b5:c4:e6:47:79
	I0818 12:04:50.856314    3824 main.go:141] libmachine: (ha-373000-m02) DBG | IP: 192.169.0.6
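	The lease search above recovers the restarted VM's IP by matching its generated MAC (ca:b5:c4:e6:47:79) against macOS's DHCP lease database. A standalone sketch of that lookup (not minikube's actual parser; the entry layout of /var/db/dhcpd_leases assumed here is `{ ... ip_address=... hw_address=1,<mac> ... }` with ip_address preceding hw_address):

```go
// Scan /var/db/dhcpd_leases for the entry whose hw_address matches `mac`
// and return its ip_address, mirroring the "Found match" lines in the log.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func ipForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{": // a new lease entry begins
			ip = ""
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address=1,ca:b5:c4:e6:47:79 -> compare the MAC after the comma
			if parts := strings.SplitN(line, ",", 2); len(parts) == 2 && parts[1] == mac {
				return ip, nil
			}
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "ca:b5:c4:e6:47:79")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip) // 192.169.0.6 in the run above
}
```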
	I0818 12:04:50.856368    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetConfigRaw
	I0818 12:04:50.857036    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetIP
	I0818 12:04:50.857215    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:04:50.857753    3824 machine.go:93] provisionDockerMachine start ...
	I0818 12:04:50.857763    3824 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:04:50.857876    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:04:50.857972    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:04:50.858077    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:04:50.858182    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:04:50.858287    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:04:50.858439    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:04:50.858605    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:04:50.858614    3824 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 12:04:50.862106    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 12:04:50.873418    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 12:04:50.874484    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:04:50.874508    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:04:50.874528    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:04:50.874540    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:04:51.253540    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:51 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 12:04:51.253561    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:51 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 12:04:51.368118    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:04:51.368138    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:04:51.368149    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:04:51.368159    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:04:51.369027    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:51 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 12:04:51.369038    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:51 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 12:04:56.941257    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:56 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0818 12:04:56.941321    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:56 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0818 12:04:56.941358    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:56 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0818 12:04:56.965032    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:56 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0818 12:05:01.918754    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 12:05:01.918770    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetMachineName
	I0818 12:05:01.918896    3824 buildroot.go:166] provisioning hostname "ha-373000-m02"
	I0818 12:05:01.918915    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetMachineName
	I0818 12:05:01.918996    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:01.919079    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:01.919189    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:01.919273    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:01.919370    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:01.919490    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:05:01.919633    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:05:01.919642    3824 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-373000-m02 && echo "ha-373000-m02" | sudo tee /etc/hostname
	I0818 12:05:01.981031    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-373000-m02
	
	I0818 12:05:01.981046    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:01.981170    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:01.981268    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:01.981355    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:01.981446    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:01.981583    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:05:01.981738    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:05:01.981752    3824 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-373000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-373000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-373000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 12:05:02.039473    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 12:05:02.039493    3824 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1007/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1007/.minikube}
	I0818 12:05:02.039504    3824 buildroot.go:174] setting up certificates
	I0818 12:05:02.039510    3824 provision.go:84] configureAuth start
	I0818 12:05:02.039517    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetMachineName
	I0818 12:05:02.039649    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetIP
	I0818 12:05:02.039751    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:02.039832    3824 provision.go:143] copyHostCerts
	I0818 12:05:02.039860    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:05:02.039907    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem, removing ...
	I0818 12:05:02.039913    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:05:02.040392    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem (1675 bytes)
	I0818 12:05:02.041069    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:05:02.041173    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem, removing ...
	I0818 12:05:02.041189    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:05:02.041355    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem (1082 bytes)
	I0818 12:05:02.041829    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:05:02.041870    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem, removing ...
	I0818 12:05:02.041876    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:05:02.041968    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem (1123 bytes)
	I0818 12:05:02.042135    3824 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem org=jenkins.ha-373000-m02 san=[127.0.0.1 192.169.0.6 ha-373000-m02 localhost minikube]
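	The provision step above issues a per-node server certificate signed by the cluster CA, with SANs covering every name the daemon may be reached by (127.0.0.1, 192.169.0.6, ha-373000-m02, localhost, minikube). A minimal sketch of that issuance with Go's crypto/x509, assuming a PKCS#1 RSA CA key on disk (file names are illustrative; this is not minikube's code path):

```go
// Issue a CA-signed server certificate carrying the SAN set from the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	caBlock, _ := pem.Decode(must(os.ReadFile("ca.pem")))      // CA certificate
	keyBlock, _ := pem.Decode(must(os.ReadFile("ca-key.pem"))) // CA private key
	caCert := must(x509.ParseCertificate(caBlock.Bytes))
	caKey := must(x509.ParsePKCS1PrivateKey(keyBlock.Bytes))

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-373000-m02"}},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
		DNSNames:     []string{"ha-373000-m02", "localhost", "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}

	key := must(rsa.GenerateKey(rand.Reader, 2048))
	der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey))
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```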
	I0818 12:05:02.193741    3824 provision.go:177] copyRemoteCerts
	I0818 12:05:02.193788    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 12:05:02.193804    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:02.193945    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:02.194042    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:02.194125    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:02.194199    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:05:02.226432    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 12:05:02.226499    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0818 12:05:02.246061    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 12:05:02.246122    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 12:05:02.265998    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 12:05:02.266073    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 12:05:02.285864    3824 provision.go:87] duration metric: took 246.348312ms to configureAuth
	I0818 12:05:02.285879    3824 buildroot.go:189] setting minikube options for container-runtime
	I0818 12:05:02.286050    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:05:02.286079    3824 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:05:02.286213    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:02.286301    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:02.286392    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:02.286472    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:02.286545    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:02.286668    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:05:02.286804    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:05:02.286812    3824 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0818 12:05:02.339893    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0818 12:05:02.339911    3824 buildroot.go:70] root file system type: tmpfs
	I0818 12:05:02.340004    3824 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0818 12:05:02.340042    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:02.340176    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:02.340315    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:02.340406    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:02.340501    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:02.340623    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:05:02.340773    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:05:02.340820    3824 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0818 12:05:02.404178    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0818 12:05:02.404194    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:02.404309    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:02.404408    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:02.404497    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:02.404595    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:02.404726    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:05:02.404863    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:05:02.404877    3824 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0818 12:05:04.075470    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0818 12:05:04.075484    3824 machine.go:96] duration metric: took 13.218134296s to provisionDockerMachine
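	The unit install just above follows a diff-or-replace idiom: write docker.service.new, and only if it differs from the installed unit, move it into place and daemon-reload/enable/restart. That is what makes reruns idempotent and avoids a pointless Docker restart. A hedged Go sketch of the same shape (runs as root; not minikube's implementation, which executes the equivalent shell over SSH):

```go
// Install `newUnit` at `path` only when its content changed, then reload
// and restart docker via systemctl, mirroring the diff || { mv; ... } idiom.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func updateUnit(path string, newUnit []byte) error {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newUnit) {
		return nil // unchanged: skip the restart entirely
	}
	if err := os.WriteFile(path+".new", newUnit, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // truncated for the sketch
	if err := updateUnit("/lib/systemd/system/docker.service", unit); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```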
	I0818 12:05:04.075493    3824 start.go:293] postStartSetup for "ha-373000-m02" (driver="hyperkit")
	I0818 12:05:04.075501    3824 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 12:05:04.075511    3824 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:05:04.075694    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 12:05:04.075707    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:04.075834    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:04.075939    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:04.076037    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:04.076115    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:05:04.108768    3824 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 12:05:04.113829    3824 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 12:05:04.113843    3824 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/addons for local assets ...
	I0818 12:05:04.113949    3824 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/files for local assets ...
	I0818 12:05:04.114103    3824 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> 15262.pem in /etc/ssl/certs
	I0818 12:05:04.114110    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /etc/ssl/certs/15262.pem
	I0818 12:05:04.114276    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 12:05:04.124928    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:05:04.155494    3824 start.go:296] duration metric: took 79.994023ms for postStartSetup
	I0818 12:05:04.155517    3824 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:05:04.155701    3824 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0818 12:05:04.155714    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:04.155817    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:04.155914    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:04.156017    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:04.156111    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:05:04.189027    3824 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0818 12:05:04.189092    3824 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0818 12:05:04.242339    3824 fix.go:56] duration metric: took 13.497284645s for fixHost
	I0818 12:05:04.242364    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:04.242535    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:04.242652    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:04.242756    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:04.242854    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:04.242979    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:05:04.243122    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:05:04.243130    3824 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 12:05:04.296405    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724007904.452858156
	
	I0818 12:05:04.296418    3824 fix.go:216] guest clock: 1724007904.452858156
	I0818 12:05:04.296424    3824 fix.go:229] Guest: 2024-08-18 12:05:04.452858156 -0700 PDT Remote: 2024-08-18 12:05:04.242354 -0700 PDT m=+32.296694535 (delta=210.504156ms)
	I0818 12:05:04.296434    3824 fix.go:200] guest clock delta is within tolerance: 210.504156ms
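	The guest-clock check above runs `date +%s.%N` inside the VM and compares it to the host clock, accepting the 210ms delta as within tolerance. A small sketch of parsing that output and computing the delta (the one-second tolerance is an assumption; the log only reports that the delta was acceptable):

```go
// Parse `date +%s.%N` output into a time.Time and compare with the host clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1724007904.452858156") // value from the run above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta=%v within tolerance=%v\n", delta, delta < time.Second)
}
```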
	I0818 12:05:04.296438    3824 start.go:83] releasing machines lock for "ha-373000-m02", held for 13.551411847s
	I0818 12:05:04.296457    3824 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:05:04.296586    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetIP
	I0818 12:05:04.320113    3824 out.go:177] * Found network options:
	I0818 12:05:04.341094    3824 out.go:177]   - NO_PROXY=192.169.0.5
	W0818 12:05:04.362987    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 12:05:04.363034    3824 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:05:04.363842    3824 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:05:04.364116    3824 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:05:04.364240    3824 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 12:05:04.364290    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	W0818 12:05:04.364348    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 12:05:04.364447    3824 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0818 12:05:04.364491    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:04.364510    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:04.364707    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:04.364754    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:04.364945    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:04.364990    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:04.365178    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:05:04.365196    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:04.365310    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	W0818 12:05:04.393978    3824 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 12:05:04.394044    3824 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 12:05:04.444626    3824 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 12:05:04.444648    3824 start.go:495] detecting cgroup driver to use...
	I0818 12:05:04.444788    3824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:05:04.460942    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0818 12:05:04.470007    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0818 12:05:04.479404    3824 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0818 12:05:04.479474    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0818 12:05:04.488768    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:05:04.497773    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0818 12:05:04.506562    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:05:04.515469    3824 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 12:05:04.524688    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0818 12:05:04.533764    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0818 12:05:04.542630    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0818 12:05:04.551641    3824 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 12:05:04.559747    3824 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 12:05:04.568155    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:05:04.661227    3824 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0818 12:05:04.678789    3824 start.go:495] detecting cgroup driver to use...
	I0818 12:05:04.678856    3824 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0818 12:05:04.693121    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:05:04.704334    3824 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 12:05:04.718489    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:05:04.731628    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:05:04.741778    3824 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0818 12:05:04.765854    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:05:04.776545    3824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:05:04.792787    3824 ssh_runner.go:195] Run: which cri-dockerd
	I0818 12:05:04.795674    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0818 12:05:04.802688    3824 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0818 12:05:04.816018    3824 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0818 12:05:04.913547    3824 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0818 12:05:05.026765    3824 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0818 12:05:05.026795    3824 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0818 12:05:05.040598    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:05:05.134191    3824 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 12:05:07.482472    3824 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.348334544s)
	I0818 12:05:07.482540    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0818 12:05:07.493839    3824 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0818 12:05:07.506964    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:05:07.517252    3824 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0818 12:05:07.612993    3824 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0818 12:05:07.715979    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:05:07.829879    3824 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0818 12:05:07.843247    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:05:07.854199    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:05:07.948839    3824 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0818 12:05:08.015240    3824 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0818 12:05:08.015316    3824 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
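	"Will wait 60s for socket path" is a stat-until-deadline poll against /var/run/cri-dockerd.sock. A self-contained sketch of that wait loop (the 250ms poll interval is an assumption, not taken from the log):

```go
// Poll until `path` exists or `timeout` elapses, like the 60s socket wait.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(250 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket ready")
}
```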
	I0818 12:05:08.020551    3824 start.go:563] Will wait 60s for crictl version
	I0818 12:05:08.020605    3824 ssh_runner.go:195] Run: which crictl
	I0818 12:05:08.024481    3824 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 12:05:08.049504    3824 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0818 12:05:08.049590    3824 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:05:08.068921    3824 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:05:08.108445    3824 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0818 12:05:08.150167    3824 out.go:177]   - env NO_PROXY=192.169.0.5
	I0818 12:05:08.171157    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetIP
	I0818 12:05:08.171639    3824 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0818 12:05:08.176186    3824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
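	The /etc/hosts rewrite above filters out any existing host.minikube.internal line, appends the fresh mapping, and copies the result back via a temp file, so repeated starts never accumulate duplicates. A sketch of the same filter-then-append dance in Go (illustrative; minikube runs the shell pipeline shown in the log):

```go
// Rewrite a hosts file so exactly one line maps `name` to `ip`.
package main

import (
	"os"
	"strings"
)

func ensureHostEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // drop any stale mapping
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	// Write a temp file first, then swap it in, so an interrupted write
	// cannot leave a truncated hosts file behind.
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	if err := ensureHostEntry("/etc/hosts", "192.169.0.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}
```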
	I0818 12:05:08.185534    3824 mustload.go:65] Loading cluster: ha-373000
	I0818 12:05:08.185713    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:05:08.185923    3824 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:05:08.185945    3824 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:05:08.194524    3824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51818
	I0818 12:05:08.194866    3824 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:05:08.195227    3824 main.go:141] libmachine: Using API Version  1
	I0818 12:05:08.195244    3824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:05:08.195441    3824 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:05:08.195542    3824 main.go:141] libmachine: (ha-373000) Calling .GetState
	I0818 12:05:08.195619    3824 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:05:08.195696    3824 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid from json: 3836
	I0818 12:05:08.196597    3824 host.go:66] Checking if "ha-373000" exists ...
	I0818 12:05:08.196853    3824 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:05:08.196874    3824 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:05:08.205321    3824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51820
	I0818 12:05:08.205651    3824 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:05:08.205991    3824 main.go:141] libmachine: Using API Version  1
	I0818 12:05:08.206003    3824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:05:08.206254    3824 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:05:08.206377    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:05:08.206469    3824 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000 for IP: 192.169.0.6
	I0818 12:05:08.206476    3824 certs.go:194] generating shared ca certs ...
	I0818 12:05:08.206495    3824 certs.go:226] acquiring lock for ca certs: {Name:mkf9c4ce6e92bb713042213e4605fc93500c1ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:05:08.206643    3824 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key
	I0818 12:05:08.206701    3824 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key
	I0818 12:05:08.206711    3824 certs.go:256] generating profile certs ...
	I0818 12:05:08.206803    3824 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.key
	I0818 12:05:08.206887    3824 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.238ba961
	I0818 12:05:08.206947    3824 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key
	I0818 12:05:08.206955    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0818 12:05:08.206976    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0818 12:05:08.206995    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0818 12:05:08.207013    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0818 12:05:08.207030    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0818 12:05:08.207058    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0818 12:05:08.207082    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0818 12:05:08.207100    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0818 12:05:08.207176    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem (1338 bytes)
	W0818 12:05:08.207217    3824 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526_empty.pem, impossibly tiny 0 bytes
	I0818 12:05:08.207233    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem (1675 bytes)
	I0818 12:05:08.207270    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem (1082 bytes)
	I0818 12:05:08.207305    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem (1123 bytes)
	I0818 12:05:08.207341    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem (1675 bytes)
	I0818 12:05:08.207407    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:05:08.207441    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:05:08.207462    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem -> /usr/share/ca-certificates/1526.pem
	I0818 12:05:08.207480    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /usr/share/ca-certificates/15262.pem
	I0818 12:05:08.207506    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:05:08.207592    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:05:08.207678    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:05:08.207761    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:05:08.207840    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:05:08.236538    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0818 12:05:08.239929    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0818 12:05:08.248132    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0818 12:05:08.251185    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0818 12:05:08.259155    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0818 12:05:08.262371    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0818 12:05:08.270151    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0818 12:05:08.273887    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0818 12:05:08.282487    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0818 12:05:08.285536    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0818 12:05:08.293364    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0818 12:05:08.296397    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0818 12:05:08.304405    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 12:05:08.324774    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0818 12:05:08.344299    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 12:05:08.364160    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0818 12:05:08.384209    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0818 12:05:08.403922    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 12:05:08.423745    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 12:05:08.443381    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 12:05:08.463375    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 12:05:08.483664    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem --> /usr/share/ca-certificates/1526.pem (1338 bytes)
	I0818 12:05:08.503661    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /usr/share/ca-certificates/15262.pem (1708 bytes)
	I0818 12:05:08.523065    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0818 12:05:08.536313    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0818 12:05:08.550006    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0818 12:05:08.563497    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0818 12:05:08.577251    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0818 12:05:08.590803    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0818 12:05:08.604390    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0818 12:05:08.618111    3824 ssh_runner.go:195] Run: openssl version
	I0818 12:05:08.622218    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 12:05:08.630462    3824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:05:08.633848    3824 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:38 /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:05:08.633898    3824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:05:08.638082    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 12:05:08.646091    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1526.pem && ln -fs /usr/share/ca-certificates/1526.pem /etc/ssl/certs/1526.pem"
	I0818 12:05:08.654220    3824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1526.pem
	I0818 12:05:08.657554    3824 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:54 /usr/share/ca-certificates/1526.pem
	I0818 12:05:08.657600    3824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1526.pem
	I0818 12:05:08.661803    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1526.pem /etc/ssl/certs/51391683.0"
	I0818 12:05:08.669959    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15262.pem && ln -fs /usr/share/ca-certificates/15262.pem /etc/ssl/certs/15262.pem"
	I0818 12:05:08.678394    3824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15262.pem
	I0818 12:05:08.681807    3824 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:54 /usr/share/ca-certificates/15262.pem
	I0818 12:05:08.681847    3824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15262.pem
	I0818 12:05:08.685950    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15262.pem /etc/ssl/certs/3ec20f2e.0"
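The openssl x509 -hash -noout calls above compute the subject-name hash that OpenSSL uses for hashed-directory CA lookup; the ln -fs commands then install each CA under /etc/ssl/certs/<hash>.0 (b5213941.0 for minikubeCA.pem, 51391683.0 for 1526.pem, 3ec20f2e.0 for 15262.pem). A minimal Go sketch of the same two steps, shelling out to openssl the way the logged ssh_runner commands do; the path in main is illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCACert installs certPath under /etc/ssl/certs/<subject-hash>.0 so
    // OpenSSL's hashed-directory lookup can find it, mirroring the
    // "openssl x509 -hash -noout" + "ln -fs" pair in the log above.
    func linkCACert(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// Remove any stale link first; os.Symlink fails if the name already exists.
    	_ = os.Remove(link)
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }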
	I0818 12:05:08.694130    3824 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 12:05:08.697586    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 12:05:08.701969    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 12:05:08.706279    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 12:05:08.710463    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 12:05:08.714641    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 12:05:08.718883    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
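Each openssl x509 -checkend 86400 above exits non-zero if the certificate expires within the next 24 hours, which is how minikube decides whether a cert needs regenerating before reuse. The same check expressed directly in Go with crypto/x509, as a sketch (the file path is one of those probed above):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // the equivalent of "openssl x509 -checkend <seconds>".
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/peer.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", soon)
    }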
	I0818 12:05:08.723008    3824 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0818 12:05:08.723074    3824 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-373000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
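The empty ExecStart= line in the kubelet unit above is deliberate: in a systemd drop-in, a bare ExecStart= clears the command inherited from the base unit before the next ExecStart= redefines it, so the override fully replaces the packaged one. A rough sketch of rendering such a drop-in before it is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; the template text is abbreviated from the logged unit and the helper is hypothetical, not minikube's actual generator:

    package main

    import (
    	"os"
    	"text/template"
    )

    // kubeletDropIn mirrors the unit shown in the log; the first, empty
    // ExecStart= resets whatever ExecStart the base kubelet.service defined.
    const kubeletDropIn = `[Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
    	err := t.Execute(os.Stdout, struct{ Version, Node, IP string }{
    		Version: "v1.31.0", Node: "ha-373000-m02", IP: "192.169.0.6",
    	})
    	if err != nil {
    		panic(err)
    	}
    }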
	I0818 12:05:08.723091    3824 kube-vip.go:115] generating kube-vip config ...
	I0818 12:05:08.723120    3824 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0818 12:05:08.734860    3824 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0818 12:05:08.734897    3824 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
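This manifest is written to /etc/kubernetes/manifests/kube-vip.yaml (the scp a few lines below), so the kubelet runs kube-vip as a static pod on each control-plane node. vip_leaderelection with the plndr-cp-lock lease makes the control-plane nodes elect a single holder of the VIP 192.169.0.254, and lb_enable/lb_port balance apiserver traffic on 8443, which is why the modprobe at 12:05:08.723 loaded the ip_vs modules first. A quick external check that the VIP's current leader is answering is a plain TCP dial; a minimal sketch, with the address taken from the config above:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // A reachability probe for the kube-vip address from the manifest above; a
    // TCP connect to VIP:port is enough to tell whether the elected leader is
    // answering on the control-plane load balancer.
    func main() {
    	conn, err := net.DialTimeout("tcp", "192.169.0.254:8443", 5*time.Second)
    	if err != nil {
    		fmt.Println("VIP not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("VIP is accepting connections")
    }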
	I0818 12:05:08.734943    3824 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 12:05:08.742519    3824 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 12:05:08.742560    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0818 12:05:08.749712    3824 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0818 12:05:08.763219    3824 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 12:05:08.776984    3824 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0818 12:05:08.790534    3824 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0818 12:05:08.793387    3824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
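The bash one-liner above drops any existing control-plane.minikube.internal entry from /etc/hosts and appends the VIP, staging the result in a temp file before copying it into place. The same edit in Go, as a simplified sketch that writes the file directly instead of going through the temp file:

    package main

    import (
    	"os"
    	"strings"
    )

    // setControlPlaneHost rewrites /etc/hosts so exactly one line maps
    // control-plane.minikube.internal, mirroring the logged bash one-liner.
    func setControlPlaneHost(ip string) error {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
    			continue // drop the stale entry, as grep -v does
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\tcontrol-plane.minikube.internal")
    	return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := setControlPlaneHost("192.169.0.254"); err != nil {
    		os.Exit(1)
    	}
    }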
	I0818 12:05:08.802777    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:05:08.900049    3824 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 12:05:08.914678    3824 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:05:08.914870    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:05:08.935865    3824 out.go:177] * Verifying Kubernetes components...
	I0818 12:05:08.977759    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:05:09.099141    3824 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 12:05:09.111487    3824 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:05:09.111691    3824 kapi.go:59] client config for ha-373000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xcb43f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0818 12:05:09.111727    3824 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0818 12:05:09.111887    3824 node_ready.go:35] waiting up to 6m0s for node "ha-373000-m02" to be "Ready" ...
	I0818 12:05:09.111971    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:09.111976    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:09.111984    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:09.111988    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.486764    3824 round_trippers.go:574] Response Status: 200 OK in 8375 milliseconds
	I0818 12:05:17.489585    3824 node_ready.go:49] node "ha-373000-m02" has status "Ready":"True"
	I0818 12:05:17.489601    3824 node_ready.go:38] duration metric: took 8.377957809s for node "ha-373000-m02" to be "Ready" ...
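The node_ready wait above is a poll of GET /api/v1/nodes/<name> until the node's Ready condition reports True; here it took about 8.4s only because the first request blocked while the apiserver behind the VIP finished coming up. A client-go sketch of the same wait, with the kubeconfig path, node name and 6m0s budget taken from the log:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19423-1007/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// Poll every 2s, give up after 6m0s -- the same budget as start.go:235.
    	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := client.CoreV1().Nodes().Get(ctx, "ha-373000-m02", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // treat apiserver hiccups as transient and keep polling
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(`node "ha-373000-m02" is Ready`)
    }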
	I0818 12:05:17.489608    3824 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 12:05:17.489646    3824 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0818 12:05:17.489661    3824 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0818 12:05:17.489699    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0818 12:05:17.489704    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.489710    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.489715    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.530230    3824 round_trippers.go:574] Response Status: 200 OK in 40 milliseconds
	I0818 12:05:17.537636    3824 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hv98f" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.537709    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-hv98f
	I0818 12:05:17.537723    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.537734    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.537739    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.557447    3824 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0818 12:05:17.557935    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:17.557944    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.557953    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.557959    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.560556    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:17.560923    3824 pod_ready.go:93] pod "coredns-6f6b679f8f-hv98f" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:17.560933    3824 pod_ready.go:82] duration metric: took 23.281295ms for pod "coredns-6f6b679f8f-hv98f" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.560940    3824 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-rcfmc" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.560984    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-rcfmc
	I0818 12:05:17.560989    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.560995    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.560998    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.564580    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:17.565125    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:17.565134    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.565139    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.565163    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.569356    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:17.569742    3824 pod_ready.go:93] pod "coredns-6f6b679f8f-rcfmc" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:17.569751    3824 pod_ready.go:82] duration metric: took 8.807255ms for pod "coredns-6f6b679f8f-rcfmc" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.569758    3824 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.569797    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000
	I0818 12:05:17.569803    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.569809    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.569812    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.574840    3824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 12:05:17.575184    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:17.575192    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.575199    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.575202    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.578378    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:17.578782    3824 pod_ready.go:93] pod "etcd-ha-373000" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:17.578792    3824 pod_ready.go:82] duration metric: took 9.028915ms for pod "etcd-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.578799    3824 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.578838    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m02
	I0818 12:05:17.578843    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.578849    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.578854    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.580930    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:17.581338    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:17.581345    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.581351    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.581356    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.583546    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:17.584029    3824 pod_ready.go:93] pod "etcd-ha-373000-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:17.584039    3824 pod_ready.go:82] duration metric: took 5.23429ms for pod "etcd-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.584046    3824 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.584081    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:05:17.584087    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.584092    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.584102    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.586354    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:17.690238    3824 request.go:632] Waited for 103.365151ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:05:17.690287    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:05:17.690294    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.690299    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.690305    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.696245    3824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 12:05:17.696879    3824 pod_ready.go:93] pod "etcd-ha-373000-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:17.696890    3824 pod_ready.go:82] duration metric: took 112.842369ms for pod "etcd-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
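The request.go:632 "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's token-bucket rate limiter: with rest.Config's QPS and Burst left at their defaults (5 and 10), a burst of GETs like this readiness sweep queues up on the client before it is ever sent. A sketch of raising the limits when building the client; the kubeconfig path and the chosen values are illustrative:

    package main

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // path illustrative
    	if err != nil {
    		panic(err)
    	}
    	// Defaults are QPS=5, Burst=10; raising them reduces the client-side
    	// "Waited for ..." throttling seen in the log, at the cost of more
    	// concurrent load on the apiserver.
    	cfg.QPS = 50
    	cfg.Burst = 100
    	_ = kubernetes.NewForConfigOrDie(cfg)
    }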
	I0818 12:05:17.696903    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.889742    3824 request.go:632] Waited for 192.805887ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000
	I0818 12:05:17.889790    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000
	I0818 12:05:17.889813    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.889819    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.889825    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.985037    3824 round_trippers.go:574] Response Status: 200 OK in 95 milliseconds
	I0818 12:05:18.089860    3824 request.go:632] Waited for 104.39101ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:18.089903    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:18.089927    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:18.089935    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:18.089944    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:18.093863    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:18.094247    3824 pod_ready.go:98] node "ha-373000" hosting pod "kube-apiserver-ha-373000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-373000" has status "Ready":"False"
	I0818 12:05:18.094258    3824 pod_ready.go:82] duration metric: took 397.361513ms for pod "kube-apiserver-ha-373000" in "kube-system" namespace to be "Ready" ...
	E0818 12:05:18.094264    3824 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-373000" hosting pod "kube-apiserver-ha-373000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-373000" has status "Ready":"False"
	I0818 12:05:18.094272    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:18.289789    3824 request.go:632] Waited for 195.476866ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:18.289877    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:18.289885    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:18.289892    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:18.289896    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:18.292952    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:18.489842    3824 request.go:632] Waited for 196.327806ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:18.489909    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:18.489917    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:18.489923    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:18.489927    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:18.494638    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:18.690780    3824 request.go:632] Waited for 96.165189ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:18.690864    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:18.690871    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:18.690878    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:18.690883    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:18.694201    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:18.890381    3824 request.go:632] Waited for 195.63212ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:18.890423    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:18.890429    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:18.890458    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:18.890462    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:18.893043    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:19.095616    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:19.095638    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:19.095645    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:19.095649    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:19.097986    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:19.290759    3824 request.go:632] Waited for 192.087215ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:19.290839    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:19.290847    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:19.290853    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:19.290860    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:19.293249    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:19.594823    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:19.594840    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:19.594847    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:19.594850    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:19.597610    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:19.690481    3824 request.go:632] Waited for 92.316894ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:19.690550    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:19.690558    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:19.690564    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:19.690568    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:19.694901    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:20.095867    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:20.095894    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:20.095905    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:20.095910    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:20.099922    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:20.100437    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:20.100445    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:20.100451    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:20.100455    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:20.102106    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:20.102474    3824 pod_ready.go:103] pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:20.595432    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:20.595453    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:20.595462    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:20.595466    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:20.597863    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:20.598227    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:20.598234    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:20.598240    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:20.598244    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:20.600061    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:21.094536    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:21.094563    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:21.094572    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:21.094577    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:21.097999    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:21.098519    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:21.098527    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:21.098533    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:21.098537    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:21.100015    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:21.595468    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:21.595500    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:21.595514    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:21.595523    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:21.601631    3824 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0818 12:05:21.601997    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:21.602004    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:21.602010    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:21.602017    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:21.605192    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:22.094552    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:22.094567    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:22.094574    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:22.094577    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:22.096991    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:22.097657    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:22.097665    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:22.097671    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:22.097675    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:22.099680    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:22.595859    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:22.595888    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:22.595900    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:22.595906    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:22.599261    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:22.599791    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:22.599802    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:22.599810    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:22.599816    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:22.602572    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:22.602966    3824 pod_ready.go:103] pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:23.096362    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:23.096389    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:23.096401    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:23.096407    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:23.100039    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:23.100588    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:23.100596    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:23.100601    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:23.100605    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:23.102265    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:23.595179    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:23.595208    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:23.595221    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:23.595229    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:23.598872    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:23.599421    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:23.599444    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:23.599450    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:23.599452    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:23.601013    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:24.095296    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:24.095327    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:24.095339    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:24.095344    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:24.099211    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:24.099655    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:24.099662    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:24.099668    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:24.099671    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:24.101457    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:24.595373    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:24.595395    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:24.595406    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:24.595412    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:24.599194    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:24.599738    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:24.599748    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:24.599754    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:24.599758    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:24.601701    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:25.094729    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:25.094756    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:25.094765    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:25.094770    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:25.098009    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:25.098599    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:25.098609    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:25.098617    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:25.098622    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:25.100470    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:25.100761    3824 pod_ready.go:103] pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:25.594953    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:25.594981    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:25.594993    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:25.595002    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:25.598801    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:25.599323    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:25.599331    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:25.599337    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:25.599340    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:25.601145    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:26.094462    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:26.094491    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:26.094502    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:26.094508    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:26.098279    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:26.098847    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:26.098857    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:26.098865    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:26.098869    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:26.100368    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:26.596309    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:26.596379    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:26.596394    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:26.596402    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:26.600128    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:26.600593    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:26.600601    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:26.600607    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:26.600613    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:26.602191    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:27.095574    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:27.095602    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:27.095613    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:27.095619    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:27.099557    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:27.100033    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:27.100043    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:27.100050    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:27.100075    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:27.101821    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:27.102055    3824 pod_ready.go:103] pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:27.594913    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:27.594967    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:27.594980    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:27.594986    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:27.598307    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:27.598905    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:27.598915    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:27.598923    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:27.598937    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:27.600697    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:28.095806    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:28.095836    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:28.095880    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:28.095892    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:28.099409    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:28.099885    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:28.099894    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:28.099904    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:28.099909    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:28.101420    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:28.594673    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:28.594699    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:28.594710    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:28.594716    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:28.598247    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:28.599059    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:28.599066    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:28.599071    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:28.599074    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:28.600807    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:29.095468    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:29.095495    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:29.095506    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:29.095515    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:29.099742    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:29.100208    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:29.100215    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:29.100221    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:29.100224    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:29.101920    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:29.102352    3824 pod_ready.go:103] pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:29.595041    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:29.595067    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:29.595079    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:29.595086    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:29.598712    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:29.599364    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:29.599372    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:29.599378    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:29.599384    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:29.601219    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:30.094218    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:30.094243    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:30.094255    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:30.094262    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:30.097685    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:30.098375    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:30.098384    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:30.098390    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:30.098393    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:30.099950    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:30.594415    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:30.594441    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:30.594453    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:30.594461    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:30.597799    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:30.598380    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:30.598391    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:30.598399    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:30.598407    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:30.600100    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:31.095000    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:31.095037    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:31.095081    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:31.095091    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:31.098989    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:31.099523    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:31.099535    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:31.099543    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:31.099565    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:31.101114    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:31.596112    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:31.596139    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:31.596151    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:31.596156    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:31.601060    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:31.601464    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:31.601473    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:31.601478    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:31.601482    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:31.608084    3824 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0818 12:05:31.608636    3824 pod_ready.go:103] pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:32.094503    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:32.094530    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:32.094541    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:32.094556    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:32.098239    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:32.099234    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:32.099247    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:32.099255    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:32.099260    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:32.101138    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:32.594723    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:32.594751    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:32.594795    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:32.594802    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:32.598658    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:32.599491    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:32.599499    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:32.599505    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:32.599508    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:32.601334    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:32.601711    3824 pod_ready.go:93] pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:32.601720    3824 pod_ready.go:82] duration metric: took 14.507895611s for pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:32.601726    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:32.601761    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m03
	I0818 12:05:32.601766    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:32.601772    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:32.601777    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:32.603708    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:32.604204    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:05:32.604212    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:32.604218    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:32.604222    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:32.606340    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:32.606652    3824 pod_ready.go:93] pod "kube-apiserver-ha-373000-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:32.606661    3824 pod_ready.go:82] duration metric: took 4.92937ms for pod "kube-apiserver-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
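
The waits above trace a simple poll loop: roughly every 500 ms, pod_ready.go GETs the pod, inspects its Ready condition, and keeps going until the condition reports True or the 6m0s budget expires. A minimal client-go sketch of that shape, assuming an already-built Clientset; the helper name waitPodReady and the exact interval are illustrative, not minikube's own code:

package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls the pod every 500ms until its Ready condition is True
// or the 6-minute budget runs out, mirroring the cadence in the log above.
// (Illustrative sketch; not minikube's pod_ready.go implementation.)
func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API error: keep polling rather than abort
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

Returning (false, nil) on a transient GET error keeps the poll alive instead of failing the whole wait, which matches the retry-until-timeout behavior visible in these lines.
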
	I0818 12:05:32.606674    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:32.606703    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:32.606708    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:32.606713    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:32.606717    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:32.609503    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:32.609918    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:32.609926    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:32.609931    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:32.609935    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:32.611839    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:33.108118    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:33.108139    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:33.108150    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:33.108155    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:33.111861    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:33.112554    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:33.112561    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:33.112567    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:33.112570    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:33.114401    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:33.608245    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:33.608285    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:33.608296    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:33.608313    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:33.611023    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:33.611446    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:33.611454    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:33.611460    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:33.611463    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:33.614112    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:34.106924    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:34.106945    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:34.106955    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:34.106961    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:34.110853    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:34.111241    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:34.111248    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:34.111254    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:34.111257    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:34.112969    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:34.606890    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:34.606910    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:34.606922    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:34.606934    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:34.610565    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:34.611180    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:34.611189    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:34.611194    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:34.611199    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:34.613556    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:34.613896    3824 pod_ready.go:103] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:35.108933    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:35.108955    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:35.108967    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:35.108975    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:35.113015    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:35.113665    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:35.113676    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:35.113684    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:35.113693    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:35.115446    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:35.607846    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:35.607862    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:35.607871    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:35.607875    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:35.610400    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:35.610817    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:35.610824    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:35.610830    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:35.610834    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:35.613002    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:36.107806    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:36.107834    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:36.107845    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:36.107850    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:36.111350    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:36.112008    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:36.112016    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:36.112022    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:36.112026    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:36.113688    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:36.607575    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:36.607590    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:36.607599    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:36.607605    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:36.610466    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:36.611075    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:36.611084    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:36.611092    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:36.611097    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:36.613213    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:37.107561    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:37.107587    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:37.107599    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:37.107607    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:37.111699    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:37.112198    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:37.112206    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:37.112212    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:37.112215    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:37.114106    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:37.114461    3824 pod_ready.go:103] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:37.606742    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:37.606757    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:37.606765    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:37.606769    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:37.609706    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:37.610101    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:37.610109    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:37.610115    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:37.610119    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:37.612095    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:38.108768    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:38.108787    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:38.108799    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:38.108807    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:38.112123    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:38.112659    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:38.112670    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:38.112677    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:38.112683    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:38.114718    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:38.606675    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:38.606689    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:38.606698    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:38.606703    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:38.609037    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:38.609536    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:38.609544    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:38.609549    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:38.609552    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:38.611709    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:39.107160    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:39.107184    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:39.107196    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:39.107203    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:39.110902    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:39.111438    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:39.111449    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:39.111457    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:39.111464    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:39.113475    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:39.606755    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:39.606770    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:39.606778    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:39.606782    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:39.609155    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:39.609534    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:39.609542    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:39.609548    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:39.609550    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:39.611533    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:39.611812    3824 pod_ready.go:103] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:40.107090    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:40.107116    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:40.107127    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:40.107135    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:40.110428    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:40.110932    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:40.110939    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:40.110945    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:40.110949    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:40.112726    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:40.607329    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:40.607344    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:40.607352    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:40.607358    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:40.609414    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:40.609793    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:40.609800    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:40.609806    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:40.609809    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:40.612006    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:41.108754    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:41.108777    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:41.108788    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:41.108794    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:41.112868    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:41.113578    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:41.113585    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:41.113591    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:41.113594    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:41.115666    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:41.607779    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:41.607794    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:41.607800    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:41.607803    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:41.626429    3824 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0818 12:05:41.626909    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:41.626917    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:41.626923    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:41.626928    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:41.638016    3824 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0818 12:05:41.638320    3824 pod_ready.go:103] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:42.107843    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:42.107861    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:42.107874    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:42.107877    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:42.125357    3824 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0818 12:05:42.125762    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:42.125770    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:42.125777    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:42.125794    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:42.137025    3824 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0818 12:05:42.606837    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:42.606853    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:42.606859    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:42.606863    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:42.631392    3824 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0818 12:05:42.632047    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:42.632055    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:42.632061    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:42.632064    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:42.644074    3824 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0818 12:05:43.106555    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:43.106567    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:43.106574    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:43.106577    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:43.108847    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:43.109231    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:43.109240    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:43.109246    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:43.109249    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:43.111648    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:43.607253    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:43.607270    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:43.607276    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:43.607281    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:43.609519    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:43.610124    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:43.610132    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:43.610138    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:43.610141    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:43.611865    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:44.106960    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:44.106982    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:44.106991    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:44.106996    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:44.110958    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:44.111626    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:44.111634    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:44.111640    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:44.111643    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:44.113355    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:44.113674    3824 pod_ready.go:103] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:44.606783    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:44.606795    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:44.606803    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:44.606806    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:44.609512    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:44.609978    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:44.609987    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:44.609993    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:44.609997    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:44.612208    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:45.108541    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:45.108568    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:45.108585    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:45.108627    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:45.112710    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:45.113170    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:45.113180    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:45.113188    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:45.113192    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:45.115093    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:45.607694    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:45.607709    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:45.607715    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:45.607718    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:45.609538    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:45.610190    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:45.610198    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:45.610204    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:45.610207    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:45.612007    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:46.107742    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:46.107761    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:46.107773    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:46.107781    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:46.111014    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:46.111681    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:46.111693    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:46.111701    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:46.111706    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:46.113564    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:46.113901    3824 pod_ready.go:103] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:46.607572    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:46.607584    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:46.607590    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:46.607594    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:46.609579    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:46.610284    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:46.610292    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:46.610297    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:46.610300    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:46.611985    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:47.107288    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:47.107311    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:47.107323    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:47.107328    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:47.110824    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:47.111541    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:47.111549    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:47.111554    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:47.111557    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:47.113249    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:47.606697    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:47.606709    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:47.606715    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:47.606718    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:47.608497    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:47.608927    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:47.608936    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:47.608941    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:47.608946    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:47.610440    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:48.106930    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:48.106956    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:48.106968    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:48.106974    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:48.110658    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:48.111153    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:48.111161    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:48.111167    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:48.111170    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:48.112733    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:48.606534    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:48.606547    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:48.606553    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:48.606556    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:48.608472    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:48.608894    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:48.608902    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:48.608908    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:48.608913    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:48.611651    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:48.611942    3824 pod_ready.go:103] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:49.107605    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:49.107632    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:49.107644    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:49.107650    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:49.111426    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:49.112028    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:49.112036    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:49.112041    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:49.112043    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:49.113955    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:49.607070    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:49.607085    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:49.607091    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:49.607095    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:49.608755    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:49.609118    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:49.609126    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:49.609132    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:49.609136    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:49.610469    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:50.108393    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:50.108414    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:50.108426    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:50.108432    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:50.111769    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:50.112262    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:50.112273    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:50.112280    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:50.112284    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:50.114291    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:50.606734    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:50.606749    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:50.606755    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:50.606758    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:50.608846    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:50.609305    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:50.609313    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:50.609318    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:50.609323    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:50.610972    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:51.107143    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:51.107164    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:51.107174    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:51.107180    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:51.110468    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:51.111149    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:51.111161    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:51.111182    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:51.111186    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:51.112895    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:51.113303    3824 pod_ready.go:103] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:51.607479    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:51.607491    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:51.607498    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:51.607502    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:51.609461    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:51.609979    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:51.609987    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:51.609993    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:51.609997    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:51.611838    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:52.106475    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:52.106495    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:52.106506    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:52.106512    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:52.110099    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:52.110714    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:52.110722    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:52.110728    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:52.110732    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:52.112418    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:52.606202    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:52.606215    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:52.606221    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:52.606224    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:52.608174    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:52.608702    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:52.608710    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:52.608716    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:52.608719    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:52.610185    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:53.106308    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:53.106366    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:53.106379    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:53.106387    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:53.109686    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:53.110263    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:53.110271    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:53.110277    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:53.110279    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:53.111992    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:53.606611    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:53.606626    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:53.606632    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:53.606637    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:53.608462    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:53.608915    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:53.608923    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:53.608928    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:53.608932    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:53.610639    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:53.611044    3824 pod_ready.go:103] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:54.108224    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:54.108251    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.108263    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.108270    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.112154    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:54.112694    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:54.112704    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.112715    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.112728    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.114303    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:54.114688    3824 pod_ready.go:93] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:54.114698    3824 pod_ready.go:82] duration metric: took 21.508688862s for pod "kube-controller-manager-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.114704    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.114734    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000-m02
	I0818 12:05:54.114740    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.114745    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.114749    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.116392    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:54.116762    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:54.116769    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.116775    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.116779    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.118208    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:54.118583    3824 pod_ready.go:93] pod "kube-controller-manager-ha-373000-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:54.118591    3824 pod_ready.go:82] duration metric: took 3.881464ms for pod "kube-controller-manager-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.118597    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.118626    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000-m03
	I0818 12:05:54.118631    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.118637    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.118639    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.120323    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:54.120754    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:05:54.120761    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.120767    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.120773    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.122312    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:54.122605    3824 pod_ready.go:93] pod "kube-controller-manager-ha-373000-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:54.122614    3824 pod_ready.go:82] duration metric: took 4.012121ms for pod "kube-controller-manager-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.122620    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2xkhp" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.122653    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2xkhp
	I0818 12:05:54.122658    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.122664    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.122668    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.124297    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:54.124644    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:54.124651    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.124657    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.124661    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.126346    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:54.126734    3824 pod_ready.go:93] pod "kube-proxy-2xkhp" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:54.126744    3824 pod_ready.go:82] duration metric: took 4.119352ms for pod "kube-proxy-2xkhp" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.126751    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5hg88" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.126784    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5hg88
	I0818 12:05:54.126789    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.126795    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.126798    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.128343    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:54.128709    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:54.128717    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.128722    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.128726    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.130213    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:54.130501    3824 pod_ready.go:93] pod "kube-proxy-5hg88" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:54.130510    3824 pod_ready.go:82] duration metric: took 3.754726ms for pod "kube-proxy-5hg88" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.130516    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bprqp" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.308685    3824 request.go:632] Waited for 178.119131ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bprqp
	I0818 12:05:54.308820    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bprqp
	I0818 12:05:54.308835    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.308860    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.308867    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.312453    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:54.508339    3824 request.go:632] Waited for 195.466477ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:05:54.508484    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:05:54.508495    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.508506    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.508513    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.512283    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:54.512758    3824 pod_ready.go:93] pod "kube-proxy-bprqp" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:54.512768    3824 pod_ready.go:82] duration metric: took 382.258295ms for pod "kube-proxy-bprqp" in "kube-system" namespace to be "Ready" ...
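
The "Waited for … due to client-side throttling, not priority and fairness" lines around this point are emitted by client-go's local token-bucket rate limiter, not by the API server: once the burst of back-to-back GETs exceeds the client's default budget (QPS 5, burst 10), each extra request is delayed locally before it is sent. A sketch of where those knobs live, assuming kubeconfig-based client construction; the 50/100 values are only examples, not what minikube configures:

package readiness

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFastClient builds a Clientset with a larger client-side rate budget.
// client-go defaults to QPS 5 / burst 10; the 50/100 below are illustrative.
func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}
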
	I0818 12:05:54.512781    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-l7zlx" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.709741    3824 request.go:632] Waited for 196.915457ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l7zlx
	I0818 12:05:54.709834    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l7zlx
	I0818 12:05:54.709843    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.709854    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.709864    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.713388    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:54.909468    3824 request.go:632] Waited for 195.387253ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m04
	I0818 12:05:54.909519    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m04
	I0818 12:05:54.909527    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.909538    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.909546    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.912861    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:54.913329    3824 pod_ready.go:93] pod "kube-proxy-l7zlx" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:54.913345    3824 pod_ready.go:82] duration metric: took 400.569828ms for pod "kube-proxy-l7zlx" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.913354    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:55.108201    3824 request.go:632] Waited for 194.795409ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000
	I0818 12:05:55.108295    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000
	I0818 12:05:55.108307    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:55.108318    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:55.108327    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:55.112015    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:55.308912    3824 request.go:632] Waited for 196.31979ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:55.308961    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:55.308969    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:55.308980    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:55.308988    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:55.312226    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:55.312828    3824 pod_ready.go:93] pod "kube-scheduler-ha-373000" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:55.312838    3824 pod_ready.go:82] duration metric: took 399.489444ms for pod "kube-scheduler-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:55.312844    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:55.509991    3824 request.go:632] Waited for 197.064513ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000-m02
	I0818 12:05:55.510043    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000-m02
	I0818 12:05:55.510054    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:55.510064    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:55.510071    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:55.512986    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:55.708355    3824 request.go:632] Waited for 194.791144ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:55.708418    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:55.708434    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:55.708472    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:55.708482    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:55.712929    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:55.713618    3824 pod_ready.go:93] pod "kube-scheduler-ha-373000-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:55.713628    3824 pod_ready.go:82] duration metric: took 400.791519ms for pod "kube-scheduler-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:55.713635    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:55.908894    3824 request.go:632] Waited for 195.195069ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000-m03
	I0818 12:05:55.908997    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000-m03
	I0818 12:05:55.909005    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:55.909017    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:55.909027    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:55.913053    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:56.108627    3824 request.go:632] Waited for 195.198114ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:05:56.108723    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:05:56.108739    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:56.108753    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:56.108764    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:56.112296    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:56.112725    3824 pod_ready.go:93] pod "kube-scheduler-ha-373000-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:56.112739    3824 pod_ready.go:82] duration metric: took 399.110792ms for pod "kube-scheduler-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:56.112748    3824 pod_ready.go:39] duration metric: took 38.624333262s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 12:05:56.112771    3824 api_server.go:52] waiting for apiserver process to appear ...
	I0818 12:05:56.112832    3824 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 12:05:56.125705    3824 api_server.go:72] duration metric: took 47.212470661s to wait for apiserver process to appear ...
	I0818 12:05:56.125716    3824 api_server.go:88] waiting for apiserver healthz status ...
	I0818 12:05:56.125733    3824 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0818 12:05:56.128805    3824 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0818 12:05:56.128837    3824 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0818 12:05:56.128843    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:56.128849    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:56.128853    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:56.129433    3824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0818 12:05:56.129522    3824 api_server.go:141] control plane version: v1.31.0
	I0818 12:05:56.129534    3824 api_server.go:131] duration metric: took 3.812968ms to wait for apiserver health ...
	I0818 12:05:56.129542    3824 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 12:05:56.308455    3824 request.go:632] Waited for 178.848504ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0818 12:05:56.308546    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0818 12:05:56.308556    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:56.308568    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:56.308578    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:56.314109    3824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 12:05:56.319517    3824 system_pods.go:59] 26 kube-system pods found
	I0818 12:05:56.319538    3824 system_pods.go:61] "coredns-6f6b679f8f-hv98f" [3ebd3bbf-bcb9-4afc-80f3-eaeb314da8eb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 12:05:56.319544    3824 system_pods.go:61] "coredns-6f6b679f8f-rcfmc" [904f6af6-c2de-4bc2-bc1d-f87e209ef4ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 12:05:56.319550    3824 system_pods.go:61] "etcd-ha-373000" [29cf5b88-f2b5-40d4-9f40-5aedcc75c4d1] Running
	I0818 12:05:56.319554    3824 system_pods.go:61] "etcd-ha-373000-m02" [6b2f793b-6d8c-4580-b555-2be21fb70246] Running
	I0818 12:05:56.319557    3824 system_pods.go:61] "etcd-ha-373000-m03" [377788a8-77e7-4e5e-a488-9791f8e26b32] Running
	I0818 12:05:56.319560    3824 system_pods.go:61] "kindnet-2gf5h" [ff15d17a-fb96-4721-847f-13f5c0e2613a] Running
	I0818 12:05:56.319562    3824 system_pods.go:61] "kindnet-k4c4p" [5219ec54-5c0a-471a-a99f-2823b4f944d9] Running
	I0818 12:05:56.319565    3824 system_pods.go:61] "kindnet-q7ghp" [933c4eb9-1d38-4575-a873-183f2fd31b25] Running
	I0818 12:05:56.319567    3824 system_pods.go:61] "kindnet-wxcx9" [11d95b76-db04-43de-9d3d-ce2147fbed21] Running
	I0818 12:05:56.319570    3824 system_pods.go:61] "kube-apiserver-ha-373000" [cc4f174d-0e3c-4aa8-b039-0bf57d6b0228] Running
	I0818 12:05:56.319574    3824 system_pods.go:61] "kube-apiserver-ha-373000-m02" [b72f02e8-46f2-46d4-8e73-03281cc424fc] Running
	I0818 12:05:56.319577    3824 system_pods.go:61] "kube-apiserver-ha-373000-m03" [5d3892b8-07e1-40a0-b4dc-4d9d9e8bc254] Running
	I0818 12:05:56.319580    3824 system_pods.go:61] "kube-controller-manager-ha-373000" [78b55cfd-8f33-4071-99b0-900fcb256ed1] Running
	I0818 12:05:56.319583    3824 system_pods.go:61] "kube-controller-manager-ha-373000-m02" [e261db19-d00f-41d1-aaf2-c1b77067c217] Running
	I0818 12:05:56.319586    3824 system_pods.go:61] "kube-controller-manager-ha-373000-m03" [fa71d7d9-904d-4df6-afe9-768758e32854] Running
	I0818 12:05:56.319589    3824 system_pods.go:61] "kube-proxy-2xkhp" [6a1cb8ad-e118-4db0-8852-b2d4973f536a] Running
	I0818 12:05:56.319592    3824 system_pods.go:61] "kube-proxy-5hg88" [4d00bf3e-8817-4220-91c5-7085fa453fa6] Running
	I0818 12:05:56.319595    3824 system_pods.go:61] "kube-proxy-bprqp" [4cf4fd8d-01fb-4e36-bede-6a44ab84b7bf] Running
	I0818 12:05:56.319597    3824 system_pods.go:61] "kube-proxy-l7zlx" [853afdf8-598a-435c-8c48-233287580493] Running
	I0818 12:05:56.319600    3824 system_pods.go:61] "kube-scheduler-ha-373000" [b1284a96-b76a-4909-9301-bfff7bbef8d6] Running
	I0818 12:05:56.319602    3824 system_pods.go:61] "kube-scheduler-ha-373000-m02" [fdfdc014-0b22-48aa-a1c6-5885bc67d472] Running
	I0818 12:05:56.319605    3824 system_pods.go:61] "kube-scheduler-ha-373000-m03" [89e378c4-274f-4aa1-8683-127061a7033e] Running
	I0818 12:05:56.319607    3824 system_pods.go:61] "kube-vip-ha-373000" [8df6a228-9693-4a21-af87-1bc9fc2af995] Running
	I0818 12:05:56.319610    3824 system_pods.go:61] "kube-vip-ha-373000-m02" [9cf001da-2ec7-4c04-bcb8-baa2bd4626f4] Running
	I0818 12:05:56.319612    3824 system_pods.go:61] "kube-vip-ha-373000-m03" [2e8a44eb-d965-4a90-ae08-464c7064ae17] Running
	I0818 12:05:56.319615    3824 system_pods.go:61] "storage-provisioner" [aa9c6f5d-6c1e-4901-83bb-62bc420ea044] Running
	I0818 12:05:56.319618    3824 system_pods.go:74] duration metric: took 190.077141ms to wait for pod list to return data ...
	I0818 12:05:56.319624    3824 default_sa.go:34] waiting for default service account to be created ...
	I0818 12:05:56.509526    3824 request.go:632] Waited for 189.85421ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0818 12:05:56.509622    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0818 12:05:56.509631    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:56.509641    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:56.509651    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:56.513692    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:56.513814    3824 default_sa.go:45] found service account: "default"
	I0818 12:05:56.513823    3824 default_sa.go:55] duration metric: took 194.201187ms for default service account to be created ...
	I0818 12:05:56.513831    3824 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 12:05:56.708948    3824 request.go:632] Waited for 195.078219ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0818 12:05:56.709031    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0818 12:05:56.709042    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:56.709053    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:56.709059    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:56.714162    3824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 12:05:56.719538    3824 system_pods.go:86] 26 kube-system pods found
	I0818 12:05:56.719553    3824 system_pods.go:89] "coredns-6f6b679f8f-hv98f" [3ebd3bbf-bcb9-4afc-80f3-eaeb314da8eb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 12:05:56.719567    3824 system_pods.go:89] "coredns-6f6b679f8f-rcfmc" [904f6af6-c2de-4bc2-bc1d-f87e209ef4ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 12:05:56.719573    3824 system_pods.go:89] "etcd-ha-373000" [29cf5b88-f2b5-40d4-9f40-5aedcc75c4d1] Running
	I0818 12:05:56.719577    3824 system_pods.go:89] "etcd-ha-373000-m02" [6b2f793b-6d8c-4580-b555-2be21fb70246] Running
	I0818 12:05:56.719580    3824 system_pods.go:89] "etcd-ha-373000-m03" [377788a8-77e7-4e5e-a488-9791f8e26b32] Running
	I0818 12:05:56.719584    3824 system_pods.go:89] "kindnet-2gf5h" [ff15d17a-fb96-4721-847f-13f5c0e2613a] Running
	I0818 12:05:56.719587    3824 system_pods.go:89] "kindnet-k4c4p" [5219ec54-5c0a-471a-a99f-2823b4f944d9] Running
	I0818 12:05:56.719589    3824 system_pods.go:89] "kindnet-q7ghp" [933c4eb9-1d38-4575-a873-183f2fd31b25] Running
	I0818 12:05:56.719593    3824 system_pods.go:89] "kindnet-wxcx9" [11d95b76-db04-43de-9d3d-ce2147fbed21] Running
	I0818 12:05:56.719596    3824 system_pods.go:89] "kube-apiserver-ha-373000" [cc4f174d-0e3c-4aa8-b039-0bf57d6b0228] Running
	I0818 12:05:56.719598    3824 system_pods.go:89] "kube-apiserver-ha-373000-m02" [b72f02e8-46f2-46d4-8e73-03281cc424fc] Running
	I0818 12:05:56.719602    3824 system_pods.go:89] "kube-apiserver-ha-373000-m03" [5d3892b8-07e1-40a0-b4dc-4d9d9e8bc254] Running
	I0818 12:05:56.719605    3824 system_pods.go:89] "kube-controller-manager-ha-373000" [78b55cfd-8f33-4071-99b0-900fcb256ed1] Running
	I0818 12:05:56.719608    3824 system_pods.go:89] "kube-controller-manager-ha-373000-m02" [e261db19-d00f-41d1-aaf2-c1b77067c217] Running
	I0818 12:05:56.719612    3824 system_pods.go:89] "kube-controller-manager-ha-373000-m03" [fa71d7d9-904d-4df6-afe9-768758e32854] Running
	I0818 12:05:56.719614    3824 system_pods.go:89] "kube-proxy-2xkhp" [6a1cb8ad-e118-4db0-8852-b2d4973f536a] Running
	I0818 12:05:56.719617    3824 system_pods.go:89] "kube-proxy-5hg88" [4d00bf3e-8817-4220-91c5-7085fa453fa6] Running
	I0818 12:05:56.719620    3824 system_pods.go:89] "kube-proxy-bprqp" [4cf4fd8d-01fb-4e36-bede-6a44ab84b7bf] Running
	I0818 12:05:56.719622    3824 system_pods.go:89] "kube-proxy-l7zlx" [853afdf8-598a-435c-8c48-233287580493] Running
	I0818 12:05:56.719625    3824 system_pods.go:89] "kube-scheduler-ha-373000" [b1284a96-b76a-4909-9301-bfff7bbef8d6] Running
	I0818 12:05:56.719627    3824 system_pods.go:89] "kube-scheduler-ha-373000-m02" [fdfdc014-0b22-48aa-a1c6-5885bc67d472] Running
	I0818 12:05:56.719630    3824 system_pods.go:89] "kube-scheduler-ha-373000-m03" [89e378c4-274f-4aa1-8683-127061a7033e] Running
	I0818 12:05:56.719633    3824 system_pods.go:89] "kube-vip-ha-373000" [8df6a228-9693-4a21-af87-1bc9fc2af995] Running
	I0818 12:05:56.719636    3824 system_pods.go:89] "kube-vip-ha-373000-m02" [9cf001da-2ec7-4c04-bcb8-baa2bd4626f4] Running
	I0818 12:05:56.719638    3824 system_pods.go:89] "kube-vip-ha-373000-m03" [2e8a44eb-d965-4a90-ae08-464c7064ae17] Running
	I0818 12:05:56.719641    3824 system_pods.go:89] "storage-provisioner" [aa9c6f5d-6c1e-4901-83bb-62bc420ea044] Running
	I0818 12:05:56.719645    3824 system_pods.go:126] duration metric: took 205.816796ms to wait for k8s-apps to be running ...
	I0818 12:05:56.719654    3824 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 12:05:56.719707    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 12:05:56.730176    3824 system_svc.go:56] duration metric: took 10.521627ms WaitForService to wait for kubelet
	I0818 12:05:56.730190    3824 kubeadm.go:582] duration metric: took 47.816976086s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:05:56.730206    3824 node_conditions.go:102] verifying NodePressure condition ...
	I0818 12:05:56.908283    3824 request.go:632] Waited for 178.034149ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0818 12:05:56.908349    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0818 12:05:56.908360    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:56.908372    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:56.908382    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:56.912474    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:56.913347    3824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 12:05:56.913361    3824 node_conditions.go:123] node cpu capacity is 2
	I0818 12:05:56.913370    3824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 12:05:56.913375    3824 node_conditions.go:123] node cpu capacity is 2
	I0818 12:05:56.913378    3824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 12:05:56.913381    3824 node_conditions.go:123] node cpu capacity is 2
	I0818 12:05:56.913384    3824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 12:05:56.913387    3824 node_conditions.go:123] node cpu capacity is 2
	I0818 12:05:56.913390    3824 node_conditions.go:105] duration metric: took 183.185572ms to run NodePressure ...
	I0818 12:05:56.913403    3824 start.go:241] waiting for startup goroutines ...
	I0818 12:05:56.913420    3824 start.go:255] writing updated cluster config ...
	I0818 12:05:56.936907    3824 out.go:201] 
	I0818 12:05:56.957765    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:05:56.957829    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:05:56.978649    3824 out.go:177] * Starting "ha-373000-m03" control-plane node in "ha-373000" cluster
	I0818 12:05:57.020705    3824 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:05:57.020729    3824 cache.go:56] Caching tarball of preloaded images
	I0818 12:05:57.020850    3824 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0818 12:05:57.020861    3824 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:05:57.020943    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:05:57.021483    3824 start.go:360] acquireMachinesLock for ha-373000-m03: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:05:57.021533    3824 start.go:364] duration metric: took 37.26µs to acquireMachinesLock for "ha-373000-m03"
	I0818 12:05:57.021546    3824 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:05:57.021559    3824 fix.go:54] fixHost starting: m03
	I0818 12:05:57.021778    3824 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:05:57.021797    3824 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:05:57.030756    3824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51825
	I0818 12:05:57.031090    3824 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:05:57.031467    3824 main.go:141] libmachine: Using API Version  1
	I0818 12:05:57.031484    3824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:05:57.031692    3824 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:05:57.031804    3824 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	I0818 12:05:57.031899    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetState
	I0818 12:05:57.031976    3824 main.go:141] libmachine: (ha-373000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:05:57.032050    3824 main.go:141] libmachine: (ha-373000-m03) DBG | hyperkit pid from json: 3309
	I0818 12:05:57.032942    3824 main.go:141] libmachine: (ha-373000-m03) DBG | hyperkit pid 3309 missing from process table
	I0818 12:05:57.032990    3824 fix.go:112] recreateIfNeeded on ha-373000-m03: state=Stopped err=<nil>
	I0818 12:05:57.033010    3824 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	W0818 12:05:57.033095    3824 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:05:57.053856    3824 out.go:177] * Restarting existing hyperkit VM for "ha-373000-m03" ...
	I0818 12:05:57.111714    3824 main.go:141] libmachine: (ha-373000-m03) Calling .Start
	I0818 12:05:57.112061    3824 main.go:141] libmachine: (ha-373000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/hyperkit.pid
	I0818 12:05:57.112084    3824 main.go:141] libmachine: (ha-373000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:05:57.113448    3824 main.go:141] libmachine: (ha-373000-m03) DBG | hyperkit pid 3309 missing from process table
	I0818 12:05:57.113464    3824 main.go:141] libmachine: (ha-373000-m03) DBG | pid 3309 is in state "Stopped"
	I0818 12:05:57.113496    3824 main.go:141] libmachine: (ha-373000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/hyperkit.pid...
	I0818 12:05:57.113651    3824 main.go:141] libmachine: (ha-373000-m03) DBG | Using UUID 94c31089-d24d-4aaf-9127-b4e2c0237480
	I0818 12:05:57.139957    3824 main.go:141] libmachine: (ha-373000-m03) DBG | Generated MAC 72:9e:9b:7f:e6:a8
	I0818 12:05:57.139982    3824 main.go:141] libmachine: (ha-373000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000
	I0818 12:05:57.140122    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"94c31089-d24d-4aaf-9127-b4e2c0237480", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b2660)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:05:57.140163    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"94c31089-d24d-4aaf-9127-b4e2c0237480", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b2660)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:05:57.140207    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "94c31089-d24d-4aaf-9127-b4e2c0237480", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/ha-373000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"}
	I0818 12:05:57.140253    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 94c31089-d24d-4aaf-9127-b4e2c0237480 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/ha-373000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"
	I0818 12:05:57.140273    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 12:05:57.141664    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 DEBUG: hyperkit: Pid is 3862
	I0818 12:05:57.142065    3824 main.go:141] libmachine: (ha-373000-m03) DBG | Attempt 0
	I0818 12:05:57.142080    3824 main.go:141] libmachine: (ha-373000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:05:57.142152    3824 main.go:141] libmachine: (ha-373000-m03) DBG | hyperkit pid from json: 3862
	I0818 12:05:57.143976    3824 main.go:141] libmachine: (ha-373000-m03) DBG | Searching for 72:9e:9b:7f:e6:a8 in /var/db/dhcpd_leases ...
	I0818 12:05:57.144038    3824 main.go:141] libmachine: (ha-373000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0818 12:05:57.144051    3824 main.go:141] libmachine: (ha-373000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c3975b}
	I0818 12:05:57.144071    3824 main.go:141] libmachine: (ha-373000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39749}
	I0818 12:05:57.144076    3824 main.go:141] libmachine: (ha-373000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c245a6}
	I0818 12:05:57.144085    3824 main.go:141] libmachine: (ha-373000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c39672}
	I0818 12:05:57.144096    3824 main.go:141] libmachine: (ha-373000-m03) DBG | Found match: 72:9e:9b:7f:e6:a8
	I0818 12:05:57.144104    3824 main.go:141] libmachine: (ha-373000-m03) DBG | IP: 192.169.0.7
	I0818 12:05:57.144124    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetConfigRaw
	I0818 12:05:57.144820    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetIP
	I0818 12:05:57.145002    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:05:57.145622    3824 machine.go:93] provisionDockerMachine start ...
	I0818 12:05:57.145633    3824 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	I0818 12:05:57.145753    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:05:57.145862    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:05:57.145984    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:05:57.146107    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:05:57.146206    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:05:57.146322    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:05:57.146485    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0818 12:05:57.146492    3824 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 12:05:57.149281    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 12:05:57.157498    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 12:05:57.158547    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:05:57.158570    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:05:57.158621    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:05:57.158637    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:05:57.538516    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 12:05:57.538532    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 12:05:57.653356    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:05:57.653382    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:05:57.653391    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:05:57.653407    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:05:57.654209    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 12:05:57.654219    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 12:06:03.320567    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:06:03 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0818 12:06:03.320633    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:06:03 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0818 12:06:03.320642    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:06:03 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0818 12:06:03.344230    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:06:03 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0818 12:06:32.211281    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 12:06:32.211301    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetMachineName
	I0818 12:06:32.211449    3824 buildroot.go:166] provisioning hostname "ha-373000-m03"
	I0818 12:06:32.211462    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetMachineName
	I0818 12:06:32.211557    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:32.211637    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:32.211710    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.211795    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.211870    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:32.212039    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:06:32.212206    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0818 12:06:32.212216    3824 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-373000-m03 && echo "ha-373000-m03" | sudo tee /etc/hostname
	I0818 12:06:32.283934    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-373000-m03
	
	I0818 12:06:32.283950    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:32.284081    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:32.284166    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.284244    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.284338    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:32.284470    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:06:32.284619    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0818 12:06:32.284630    3824 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-373000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-373000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-373000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 12:06:32.349979    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 12:06:32.349995    3824 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1007/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1007/.minikube}
	I0818 12:06:32.350007    3824 buildroot.go:174] setting up certificates
	I0818 12:06:32.350014    3824 provision.go:84] configureAuth start
	I0818 12:06:32.350021    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetMachineName
	I0818 12:06:32.350153    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetIP
	I0818 12:06:32.350260    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:32.350351    3824 provision.go:143] copyHostCerts
	I0818 12:06:32.350379    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:06:32.350451    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem, removing ...
	I0818 12:06:32.350457    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:06:32.350602    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem (1082 bytes)
	I0818 12:06:32.350813    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:06:32.350855    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem, removing ...
	I0818 12:06:32.350861    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:06:32.350938    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem (1123 bytes)
	I0818 12:06:32.351094    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:06:32.351132    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem, removing ...
	I0818 12:06:32.351137    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:06:32.351223    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem (1675 bytes)
	I0818 12:06:32.351372    3824 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem org=jenkins.ha-373000-m03 san=[127.0.0.1 192.169.0.7 ha-373000-m03 localhost minikube]
	I0818 12:06:32.510769    3824 provision.go:177] copyRemoteCerts
	I0818 12:06:32.510826    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 12:06:32.510842    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:32.510985    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:32.511073    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.511136    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:32.511201    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/id_rsa Username:docker}
	I0818 12:06:32.548268    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 12:06:32.548346    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 12:06:32.568706    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 12:06:32.568782    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0818 12:06:32.588790    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 12:06:32.588863    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 12:06:32.608953    3824 provision.go:87] duration metric: took 258.934195ms to configureAuth
	I0818 12:06:32.608976    3824 buildroot.go:189] setting minikube options for container-runtime
	I0818 12:06:32.609164    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:06:32.609181    3824 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	I0818 12:06:32.609317    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:32.609407    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:32.609488    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.609563    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.609655    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:32.609780    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:06:32.609954    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0818 12:06:32.609962    3824 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0818 12:06:32.671099    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0818 12:06:32.671110    3824 buildroot.go:70] root file system type: tmpfs
	I0818 12:06:32.671182    3824 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0818 12:06:32.671194    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:32.671327    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:32.671421    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.671505    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.671597    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:32.671725    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:06:32.671862    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0818 12:06:32.671916    3824 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0818 12:06:32.743226    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0818 12:06:32.743243    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:32.743369    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:32.743463    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.743553    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.743628    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:32.743742    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:06:32.743890    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0818 12:06:32.743902    3824 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0818 12:06:34.364405    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0818 12:06:34.364421    3824 machine.go:96] duration metric: took 37.219949388s to provisionDockerMachine
	I0818 12:06:34.364429    3824 start.go:293] postStartSetup for "ha-373000-m03" (driver="hyperkit")
	I0818 12:06:34.364441    3824 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 12:06:34.364454    3824 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	I0818 12:06:34.364637    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 12:06:34.364649    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:34.364748    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:34.364846    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:34.364924    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:34.364998    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/id_rsa Username:docker}
	I0818 12:06:34.403257    3824 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 12:06:34.406448    3824 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 12:06:34.406462    3824 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/addons for local assets ...
	I0818 12:06:34.406565    3824 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/files for local assets ...
	I0818 12:06:34.406753    3824 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> 15262.pem in /etc/ssl/certs
	I0818 12:06:34.406760    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /etc/ssl/certs/15262.pem
	I0818 12:06:34.406965    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 12:06:34.415199    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:06:34.434664    3824 start.go:296] duration metric: took 70.221347ms for postStartSetup
	I0818 12:06:34.434685    3824 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	I0818 12:06:34.434881    3824 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0818 12:06:34.434895    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:34.434985    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:34.435078    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:34.435180    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:34.435266    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/id_rsa Username:docker}
	I0818 12:06:34.472820    3824 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0818 12:06:34.472878    3824 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0818 12:06:34.507076    3824 fix.go:56] duration metric: took 37.486680553s for fixHost
	I0818 12:06:34.507105    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:34.507242    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:34.507350    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:34.507450    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:34.507537    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:34.507661    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:06:34.507812    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0818 12:06:34.507820    3824 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 12:06:34.567906    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724007994.725838648
	
	I0818 12:06:34.567925    3824 fix.go:216] guest clock: 1724007994.725838648
	I0818 12:06:34.567930    3824 fix.go:229] Guest: 2024-08-18 12:06:34.725838648 -0700 PDT Remote: 2024-08-18 12:06:34.507094 -0700 PDT m=+122.564244892 (delta=218.744648ms)
	I0818 12:06:34.567943    3824 fix.go:200] guest clock delta is within tolerance: 218.744648ms
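fix.go compares the guest's `date +%s.%N` output against host wall-clock time at the moment the command returns; the 218ms delta here is well inside tolerance, so no clock sync is forced. A sketch of the parse-and-compare using the exact values from this log (the 1s tolerance below is illustrative, not minikube's actual constant):

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // clockDelta parses `date +%s.%N` output and returns how far the guest
    // clock runs ahead of (positive) or behind (negative) the host clock.
    // float64 loses a few nanoseconds of the %N part, which is irrelevant
    // at millisecond-scale tolerances.
    func clockDelta(guestDate string, hostNow time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guestDate, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(hostNow), nil
    }

    func main() {
        // Values taken from this log: guest 1724007994.725838648,
        // host 2024-08-18 12:06:34.507094 PDT.
        delta, err := clockDelta("1724007994.725838648", time.Unix(1724007994, 507094000))
        if err != nil {
            panic(err)
        }
        const tolerance = time.Second // illustrative threshold
        fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta > -tolerance && delta < tolerance)
    }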
	I0818 12:06:34.567946    3824 start.go:83] releasing machines lock for "ha-373000-m03", held for 37.547576549s
	I0818 12:06:34.567963    3824 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	I0818 12:06:34.568094    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetIP
	I0818 12:06:34.591371    3824 out.go:177] * Found network options:
	I0818 12:06:34.612327    3824 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0818 12:06:34.633268    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	W0818 12:06:34.633293    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 12:06:34.633308    3824 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	I0818 12:06:34.633777    3824 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	I0818 12:06:34.633931    3824 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	I0818 12:06:34.634012    3824 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 12:06:34.634042    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	W0818 12:06:34.634075    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	W0818 12:06:34.634099    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 12:06:34.634164    3824 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0818 12:06:34.634177    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:34.634183    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:34.634314    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:34.634342    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:34.634432    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:34.634462    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:34.634570    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/id_rsa Username:docker}
	I0818 12:06:34.634589    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:34.634716    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/id_rsa Username:docker}
	W0818 12:06:34.668553    3824 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 12:06:34.668615    3824 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 12:06:34.719514    3824 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
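The find/mv one-liner above sidelines any bridge or podman CNI definitions by renaming them to *.mk_disabled, so only the runtime's chosen network plugin stays active; here it caught 87-podman-bridge.conflist. A rough local equivalent, assuming direct filesystem access rather than ssh_runner:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNI renames bridge/podman CNI configs in dir to
    // *.mk_disabled, skipping files that are already disabled.
    func disableBridgeCNI(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return disabled, err
                }
                disabled = append(disabled, src)
            }
        }
        return disabled, nil
    }

    func main() {
        disabled, err := disableBridgeCNI("/etc/cni/net.d")
        fmt.Println(disabled, err) // e.g. [/etc/cni/net.d/87-podman-bridge.conflist]
    }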
	I0818 12:06:34.719537    3824 start.go:495] detecting cgroup driver to use...
	I0818 12:06:34.719641    3824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:06:34.736086    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0818 12:06:34.744327    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0818 12:06:34.752345    3824 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0818 12:06:34.752395    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0818 12:06:34.760474    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:06:34.768546    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0818 12:06:34.776560    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:06:34.784665    3824 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 12:06:34.792933    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0818 12:06:34.801000    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0818 12:06:34.809207    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0818 12:06:34.817499    3824 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 12:06:34.824699    3824 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 12:06:34.832081    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:06:34.922497    3824 ssh_runner.go:195] Run: sudo systemctl restart containerd
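The run of sed edits rewrites /etc/containerd/config.toml in place: pin sandbox_image to pause:3.10, disable restrict_oom_score_adj, force SystemdCgroup = false (the cgroupfs driver announced at containerd.go:146), migrate the v1 linux and runc.v1 runtimes to runc.v2, point conf_dir at /etc/cni/net.d, and re-insert enable_unprivileged_ports = true, followed by daemon-reload and restart. The cgroup edit, done with Go's regexp instead of sed, looks like this sketch:

    package main

    import (
        "fmt"
        "regexp"
    )

    // setCgroupfs rewrites any `SystemdCgroup = ...` assignment to false while
    // preserving the original indentation, mirroring
    // sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'.
    func setCgroupfs(config string) string {
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        return re.ReplaceAllString(config, "${1}SystemdCgroup = false")
    }

    func main() {
        in := "    [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n      SystemdCgroup = true\n"
        fmt.Print(setCgroupfs(in))
    }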
	I0818 12:06:34.942245    3824 start.go:495] detecting cgroup driver to use...
	I0818 12:06:34.942318    3824 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0818 12:06:34.961594    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:06:34.977959    3824 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 12:06:34.994785    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:06:35.006539    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:06:35.017278    3824 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0818 12:06:35.039389    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:06:35.050815    3824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:06:35.065658    3824 ssh_runner.go:195] Run: which cri-dockerd
	I0818 12:06:35.068495    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0818 12:06:35.078248    3824 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0818 12:06:35.092006    3824 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0818 12:06:35.191577    3824 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0818 12:06:35.301568    3824 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0818 12:06:35.301599    3824 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0818 12:06:35.317876    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:06:35.413915    3824 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 12:06:37.731416    3824 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.317550809s)
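docker.go:574 writes a 130-byte /etc/docker/daemon.json selecting the cgroupfs driver before the restart (which took 2.3s here). The log does not show the file's contents; assuming it uses Docker's documented exec-opts key, generating such a file is a single MarshalIndent call:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // daemonConfig models the subset of /etc/docker/daemon.json needed to
    // pick a cgroup driver. "exec-opts" is Docker's documented key; the rest
    // of minikube's actual file is not shown in this log.
    type daemonConfig struct {
        ExecOpts []string `json:"exec-opts"`
    }

    func main() {
        b, err := json.MarshalIndent(daemonConfig{
            ExecOpts: []string{"native.cgroupdriver=cgroupfs"},
        }, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(b))
    }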
	I0818 12:06:37.731481    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0818 12:06:37.741565    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:06:37.751381    3824 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0818 12:06:37.845484    3824 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0818 12:06:37.959362    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:06:38.068888    3824 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0818 12:06:38.082534    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:06:38.093177    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:06:38.188351    3824 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0818 12:06:38.252978    3824 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0818 12:06:38.253055    3824 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0818 12:06:38.257331    3824 start.go:563] Will wait 60s for crictl version
	I0818 12:06:38.257383    3824 ssh_runner.go:195] Run: which crictl
	I0818 12:06:38.260636    3824 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 12:06:38.285125    3824 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0818 12:06:38.285203    3824 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:06:38.303582    3824 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
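start.go waits up to 60s for /var/run/cri-dockerd.sock to appear, then probes the runtime through crictl (Docker 27.1.2, CRI API v1) and directly via `docker version --format {{.Server.Version}}`. The socket wait is a plain stat-poll; a hedged standalone version:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForPath polls os.Stat until path exists or timeout elapses.
    func waitForPath(path string, timeout, interval time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("%s did not appear within %v", path, timeout)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        fmt.Println(waitForPath("/var/run/cri-dockerd.sock", 60*time.Second, time.Second))
    }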
	I0818 12:06:38.341530    3824 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0818 12:06:38.415385    3824 out.go:177]   - env NO_PROXY=192.169.0.5
	I0818 12:06:38.457289    3824 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0818 12:06:38.478242    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetIP
	I0818 12:06:38.478613    3824 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0818 12:06:38.483129    3824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
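The bash one-liner keeps the hosts entry idempotent: filter out any existing host.minikube.internal line, append the fresh mapping, and sudo-copy the temp file back over /etc/hosts (the same trick is reused for control-plane.minikube.internal further down). The filter-then-append step in Go, operating on the file contents in memory:

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHost removes any line ending in "\t<name>" and appends "ip\tname",
    // mirroring { grep -v $'\tname$' /etc/hosts; echo "ip\tname"; } > tmp.
    func upsertHost(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        fmt.Print(upsertHost("127.0.0.1\tlocalhost\n", "192.169.0.1", "host.minikube.internal"))
    }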
	I0818 12:06:38.492823    3824 mustload.go:65] Loading cluster: ha-373000
	I0818 12:06:38.493001    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:06:38.493248    3824 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:06:38.493270    3824 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:06:38.502531    3824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51847
	I0818 12:06:38.502982    3824 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:06:38.503380    3824 main.go:141] libmachine: Using API Version  1
	I0818 12:06:38.503398    3824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:06:38.503603    3824 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:06:38.503720    3824 main.go:141] libmachine: (ha-373000) Calling .GetState
	I0818 12:06:38.503806    3824 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:06:38.503908    3824 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid from json: 3836
	I0818 12:06:38.504863    3824 host.go:66] Checking if "ha-373000" exists ...
	I0818 12:06:38.505136    3824 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:06:38.505159    3824 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:06:38.514076    3824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51849
	I0818 12:06:38.514417    3824 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:06:38.514734    3824 main.go:141] libmachine: Using API Version  1
	I0818 12:06:38.514748    3824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:06:38.514977    3824 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:06:38.515088    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:06:38.515180    3824 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000 for IP: 192.169.0.7
	I0818 12:06:38.515186    3824 certs.go:194] generating shared ca certs ...
	I0818 12:06:38.515198    3824 certs.go:226] acquiring lock for ca certs: {Name:mkf9c4ce6e92bb713042213e4605fc93500c1ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:06:38.515378    3824 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key
	I0818 12:06:38.515454    3824 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key
	I0818 12:06:38.515480    3824 certs.go:256] generating profile certs ...
	I0818 12:06:38.515601    3824 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.key
	I0818 12:06:38.515691    3824 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.a796c580
	I0818 12:06:38.515764    3824 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key
	I0818 12:06:38.515772    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0818 12:06:38.515792    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0818 12:06:38.515811    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0818 12:06:38.515836    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0818 12:06:38.515854    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0818 12:06:38.515881    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0818 12:06:38.515909    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0818 12:06:38.515932    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0818 12:06:38.516021    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem (1338 bytes)
	W0818 12:06:38.516070    3824 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526_empty.pem, impossibly tiny 0 bytes
	I0818 12:06:38.516079    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem (1675 bytes)
	I0818 12:06:38.516113    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem (1082 bytes)
	I0818 12:06:38.516146    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem (1123 bytes)
	I0818 12:06:38.516176    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem (1675 bytes)
	I0818 12:06:38.516242    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:06:38.516275    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /usr/share/ca-certificates/15262.pem
	I0818 12:06:38.516297    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:06:38.516315    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem -> /usr/share/ca-certificates/1526.pem
	I0818 12:06:38.516339    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:06:38.516428    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:06:38.516506    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:06:38.516591    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:06:38.516676    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:06:38.545460    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0818 12:06:38.549008    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0818 12:06:38.556894    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0818 12:06:38.559945    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0818 12:06:38.573932    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0818 12:06:38.577300    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0818 12:06:38.585295    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0818 12:06:38.588495    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0818 12:06:38.596413    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0818 12:06:38.600019    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0818 12:06:38.608205    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0818 12:06:38.612275    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0818 12:06:38.620061    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 12:06:38.640273    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0818 12:06:38.660114    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 12:06:38.679901    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0818 12:06:38.699819    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0818 12:06:38.718980    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 12:06:38.739258    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 12:06:38.759233    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 12:06:38.779159    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /usr/share/ca-certificates/15262.pem (1708 bytes)
	I0818 12:06:38.799128    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 12:06:38.819459    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem --> /usr/share/ca-certificates/1526.pem (1338 bytes)
	I0818 12:06:38.839485    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0818 12:06:38.853931    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0818 12:06:38.867660    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0818 12:06:38.881016    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0818 12:06:38.894734    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0818 12:06:38.908655    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0818 12:06:38.922215    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0818 12:06:38.936152    3824 ssh_runner.go:195] Run: openssl version
	I0818 12:06:38.940292    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15262.pem && ln -fs /usr/share/ca-certificates/15262.pem /etc/ssl/certs/15262.pem"
	I0818 12:06:38.948670    3824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15262.pem
	I0818 12:06:38.951984    3824 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:54 /usr/share/ca-certificates/15262.pem
	I0818 12:06:38.952025    3824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15262.pem
	I0818 12:06:38.956301    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15262.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 12:06:38.964945    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 12:06:38.973410    3824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:06:38.976837    3824 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:38 /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:06:38.976884    3824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:06:38.980998    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 12:06:38.989539    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1526.pem && ln -fs /usr/share/ca-certificates/1526.pem /etc/ssl/certs/1526.pem"
	I0818 12:06:38.998105    3824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1526.pem
	I0818 12:06:39.001464    3824 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:54 /usr/share/ca-certificates/1526.pem
	I0818 12:06:39.001509    3824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1526.pem
	I0818 12:06:39.005796    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1526.pem /etc/ssl/certs/51391683.0"
	I0818 12:06:39.014114    3824 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 12:06:39.017475    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 12:06:39.021708    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 12:06:39.025941    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 12:06:39.030326    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 12:06:39.034611    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 12:06:39.038815    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
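Each `openssl x509 -checkend 86400` above asks whether a cert expires within the next 24 hours; a non-zero exit is what would push minikube into its regeneration path, and all six control-plane certs pass here. The same test with crypto/x509, assuming the PEM bytes have already been read off the guest:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "fmt"
        "log"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in pemBytes will
    // expire within d, matching `openssl x509 -checkend <seconds>`.
    func expiresWithin(pemBytes []byte, d time.Duration) (bool, error) {
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            return false, errors.New("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            log.Fatal(err)
        }
        soon, err := expiresWithin(pemBytes, 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("expires within 24h:", soon)
    }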
	I0818 12:06:39.043094    3824 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.31.0 docker true true} ...
	I0818 12:06:39.043154    3824 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-373000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
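Only three values in the kubelet drop-in vary per node: the binaries path version, --hostname-override, and --node-ip (m03 gets 192.169.0.7). A minimal text/template sketch of that substitution; the template text is a simplification of minikube's real unit:

    package main

    import (
        "os"
        "text/template"
    )

    const unit = `[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        data := struct{ Version, Node, IP string }{"v1.31.0", "ha-373000-m03", "192.169.0.7"}
        if err := t.Execute(os.Stdout, data); err != nil {
            panic(err)
        }
    }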
	I0818 12:06:39.043171    3824 kube-vip.go:115] generating kube-vip config ...
	I0818 12:06:39.043216    3824 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0818 12:06:39.056006    3824 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0818 12:06:39.056050    3824 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
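In the generated kube-vip static pod, the moving parts are the VIP (address: 192.169.0.254, the APIServerHAVIP from the config line above), the NIC it ARPs on (vip_interface: eth0), and the load-balancer toggles that kube-vip.go:167 auto-enabled; the rest is boilerplate. Since kube-vip advertises the VIP via gratuitous ARP (vip_arp: "true"), the address must sit on the same L2 segment as the node IPs. A quick sanity check for that; the /24 is an assumption read off the addresses in this log:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        vip := net.ParseIP("192.169.0.254")
        _, subnet, err := net.ParseCIDR("192.169.0.5/24")
        if err != nil {
            panic(err)
        }
        // The VIP must be routable on the same segment the control-plane
        // nodes ARP on, otherwise kube-vip's gratuitous ARPs go nowhere.
        fmt.Printf("VIP %v in %v: %v\n", vip, subnet, subnet.Contains(vip))
    }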
	I0818 12:06:39.056106    3824 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 12:06:39.064688    3824 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 12:06:39.064746    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0818 12:06:39.073725    3824 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0818 12:06:39.087281    3824 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 12:06:39.101247    3824 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0818 12:06:39.115342    3824 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0818 12:06:39.118445    3824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 12:06:39.127826    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:06:39.220452    3824 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 12:06:39.236932    3824 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:06:39.237124    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:06:39.258433    3824 out.go:177] * Verifying Kubernetes components...
	I0818 12:06:39.298999    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:06:39.406166    3824 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 12:06:39.422783    3824 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:06:39.423042    3824 kapi.go:59] client config for ha-373000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xcb43f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0818 12:06:39.423091    3824 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0818 12:06:39.423285    3824 node_ready.go:35] waiting up to 6m0s for node "ha-373000-m03" to be "Ready" ...
	I0818 12:06:39.423367    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:39.423379    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.423392    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.423403    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.425980    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:39.924516    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:39.924530    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.924537    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.924541    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.927146    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:39.927756    3824 node_ready.go:49] node "ha-373000-m03" has status "Ready":"True"
	I0818 12:06:39.927766    3824 node_ready.go:38] duration metric: took 504.486873ms for node "ha-373000-m03" to be "Ready" ...
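node_ready polls GET /api/v1/nodes/ha-373000-m03 on a roughly 500ms cadence until status.conditions reports Ready=True; here the node was already Ready on the first re-check. A stripped-down poller against the same endpoint, using plain net/http and assuming a client already carrying the TLS client certs from the kapi config dump above:

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "time"
    )

    // nodeStatus models just the slice of the Node object the check needs.
    type nodeStatus struct {
        Status struct {
            Conditions []struct {
                Type   string `json:"type"`
                Status string `json:"status"`
            } `json:"conditions"`
        } `json:"status"`
    }

    // waitNodeReady polls the API server until the node reports Ready=True or
    // timeout elapses. client is assumed to already carry TLS client certs.
    func waitNodeReady(client *http.Client, server, node string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(server + "/api/v1/nodes/" + node)
            if err == nil && resp.StatusCode == http.StatusOK {
                var n nodeStatus
                decErr := json.NewDecoder(resp.Body).Decode(&n)
                resp.Body.Close()
                if decErr == nil {
                    for _, c := range n.Status.Conditions {
                        if c.Type == "Ready" && c.Status == "True" {
                            return nil
                        }
                    }
                }
            } else if err == nil {
                resp.Body.Close()
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("node %s not Ready within %v", node, timeout)
    }

    func main() {
        err := waitNodeReady(http.DefaultClient, "https://192.169.0.5:8443", "ha-373000-m03", 6*time.Minute)
        fmt.Println(err)
    }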
	I0818 12:06:39.927772    3824 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 12:06:39.927816    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0818 12:06:39.927826    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.927832    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.927835    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.932950    3824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 12:06:39.939217    3824 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hv98f" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:39.939280    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-hv98f
	I0818 12:06:39.939289    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.939296    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.939299    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.942170    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:39.942704    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:06:39.942712    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.942718    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.942722    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.945194    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:39.945502    3824 pod_ready.go:93] pod "coredns-6f6b679f8f-hv98f" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:39.945513    3824 pod_ready.go:82] duration metric: took 6.280436ms for pod "coredns-6f6b679f8f-hv98f" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:39.945527    3824 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-rcfmc" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:39.945573    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-rcfmc
	I0818 12:06:39.945579    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.945596    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.945604    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.947744    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:39.948231    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:06:39.948239    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.948244    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.948249    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.949935    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:39.950306    3824 pod_ready.go:93] pod "coredns-6f6b679f8f-rcfmc" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:39.950316    3824 pod_ready.go:82] duration metric: took 4.783283ms for pod "coredns-6f6b679f8f-rcfmc" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:39.950324    3824 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:39.950360    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000
	I0818 12:06:39.950366    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.950371    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.950376    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.952196    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:39.952623    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:06:39.952632    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.952637    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.952640    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.954395    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:39.954700    3824 pod_ready.go:93] pod "etcd-ha-373000" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:39.954713    3824 pod_ready.go:82] duration metric: took 4.380752ms for pod "etcd-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:39.954728    3824 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:39.954770    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m02
	I0818 12:06:39.954775    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.954781    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.954784    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.956816    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:39.957264    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:06:39.957272    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.957278    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.957281    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.958954    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:39.959393    3824 pod_ready.go:93] pod "etcd-ha-373000-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:39.959403    3824 pod_ready.go:82] duration metric: took 4.669444ms for pod "etcd-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:39.959410    3824 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:40.124592    3824 request.go:632] Waited for 165.145751ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:40.124629    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:40.124633    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:40.124639    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:40.124645    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:40.127273    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
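The "Waited for ... due to client-side throttling" lines come from client-go's own rate limiter, not the API server: with QPS:0 and Burst:0 in the config dump above, client-go falls back to its defaults (historically 5 QPS with a burst of 10), and the readiness loop's pairs of GETs plus the pod list overrun the bucket. The mechanism is an ordinary token bucket; a sketch with golang.org/x/time/rate showing the same forced waits (flowcontrol.RateLimiter in the dump is client-go's equivalent):

    package main

    import (
        "context"
        "fmt"
        "time"

        "golang.org/x/time/rate"
    )

    func main() {
        // client-go's historical defaults: 5 requests/second, burst of 10.
        limiter := rate.NewLimiter(rate.Limit(5), 10)
        start := time.Now()
        for i := 0; i < 15; i++ {
            // Wait blocks until a token is available; past the burst this
            // is exactly the "Waited for ..." delay logged by request.go.
            if err := limiter.Wait(context.Background()); err != nil {
                panic(err)
            }
            fmt.Printf("req %2d at %v\n", i, time.Since(start).Round(time.Millisecond))
        }
    }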
	I0818 12:06:40.325487    3824 request.go:632] Waited for 197.85948ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:40.325561    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:40.325576    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:40.325592    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:40.325603    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:40.328610    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:40.524678    3824 request.go:632] Waited for 64.314725ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:40.524779    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:40.524787    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:40.524794    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:40.524800    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:40.534379    3824 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0818 12:06:40.724687    3824 request.go:632] Waited for 189.641273ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:40.724767    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:40.724780    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:40.724790    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:40.724795    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:40.727857    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:40.960310    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:40.960323    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:40.960330    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:40.960334    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:40.962980    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:41.125004    3824 request.go:632] Waited for 161.489984ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:41.125051    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:41.125059    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:41.125068    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:41.125074    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:41.127660    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:41.459552    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:41.459565    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:41.459572    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:41.459576    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:41.462348    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:41.524806    3824 request.go:632] Waited for 61.84167ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:41.524878    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:41.524889    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:41.524897    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:41.524902    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:41.527287    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:41.959574    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:41.959588    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:41.959594    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:41.959599    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:41.962051    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:41.962553    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:41.962563    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:41.962570    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:41.962588    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:41.964779    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:41.965088    3824 pod_ready.go:103] pod "etcd-ha-373000-m03" in "kube-system" namespace has status "Ready":"False"
	I0818 12:06:42.461485    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:42.461498    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:42.461504    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:42.461507    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:42.463825    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:42.464339    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:42.464350    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:42.464358    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:42.464363    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:42.466190    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:42.960283    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:42.960301    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:42.960308    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:42.960313    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:42.962745    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:42.963399    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:42.963408    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:42.963415    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:42.963420    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:42.965667    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:43.460941    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:43.460961    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:43.460973    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:43.460980    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:43.464358    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:43.464865    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:43.464876    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:43.464885    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:43.464903    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:43.466644    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:43.960616    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:43.960635    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:43.960662    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:43.960670    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:43.963241    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:43.963592    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:43.963599    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:43.963605    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:43.963609    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:43.965295    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:43.965679    3824 pod_ready.go:103] pod "etcd-ha-373000-m03" in "kube-system" namespace has status "Ready":"False"
	I0818 12:06:44.459655    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:44.459670    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:44.459678    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:44.459684    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:44.462938    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:44.463437    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:44.463446    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:44.463453    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:44.463456    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:44.465455    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:44.960738    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:44.960764    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:44.960775    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:44.960781    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:44.964513    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:44.965181    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:44.965189    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:44.965195    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:44.965198    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:44.967125    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:45.459544    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:45.459557    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:45.459564    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:45.459567    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:45.461789    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:45.462287    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:45.462295    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:45.462301    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:45.462304    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:45.463842    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:45.959866    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:45.959882    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:45.959891    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:45.959895    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:45.962334    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:45.962673    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:45.962680    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:45.962686    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:45.962691    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:45.964328    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:46.460263    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:46.460278    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:46.460302    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:46.460307    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:46.462738    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:46.463273    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:46.463281    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:46.463287    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:46.463290    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:46.465376    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:46.465623    3824 pod_ready.go:103] pod "etcd-ha-373000-m03" in "kube-system" namespace has status "Ready":"False"
	I0818 12:06:46.960651    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:46.960728    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:46.960746    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:46.960756    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:46.963413    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:46.963863    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:46.963871    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:46.963877    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:46.963879    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:46.965522    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:47.460546    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:47.460559    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:47.460565    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:47.460569    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:47.462347    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:47.462831    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:47.462839    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:47.462845    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:47.462849    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:47.465797    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:47.959568    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:47.959595    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:47.959606    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:47.959613    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:47.962968    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:47.963654    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:47.963665    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:47.963673    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:47.963678    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:47.965348    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:48.460843    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:48.460865    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:48.460878    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:48.460888    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:48.464226    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:48.464806    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:48.464814    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:48.464820    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:48.464824    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:48.466523    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:48.466821    3824 pod_ready.go:103] pod "etcd-ha-373000-m03" in "kube-system" namespace has status "Ready":"False"
	I0818 12:06:48.960506    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:48.960532    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:48.960544    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:48.960549    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:48.964130    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:48.964586    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:48.964596    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:48.964604    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:48.964610    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:48.966425    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:49.459390    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:49.459415    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:49.459427    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:49.459433    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:49.463245    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:49.463769    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:49.463781    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:49.463788    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:49.463792    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:49.466543    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:49.959537    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:49.959561    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:49.959571    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:49.959577    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:49.962607    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:49.963064    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:49.963072    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:49.963077    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:49.963081    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:49.964839    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:50.460746    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:50.460763    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:50.460770    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:50.460773    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:50.463380    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:50.463793    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:50.463801    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:50.463807    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:50.463810    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:50.466499    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:50.466793    3824 pod_ready.go:103] pod "etcd-ha-373000-m03" in "kube-system" namespace has status "Ready":"False"
	I0818 12:06:50.960528    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:50.960552    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:50.960563    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:50.960569    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:50.964095    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:50.964754    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:50.964765    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:50.964773    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:50.964779    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:50.966674    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:51.459276    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:51.459296    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:51.459307    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:51.459323    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:51.462737    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:51.463318    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:51.463325    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:51.463331    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:51.463342    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:51.465140    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:51.960158    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:51.960178    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:51.960190    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:51.960196    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:51.963615    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:51.964184    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:51.964194    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:51.964201    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:51.964208    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:51.966317    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:52.459260    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:52.459275    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:52.459284    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:52.459299    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:52.461808    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:52.462199    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:52.462207    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:52.462214    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:52.462217    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:52.464015    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:52.959295    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:52.959313    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:52.959324    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:52.959330    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:52.963923    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:06:52.964435    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:52.964443    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:52.964449    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:52.964452    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:52.967830    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:52.968298    3824 pod_ready.go:103] pod "etcd-ha-373000-m03" in "kube-system" namespace has status "Ready":"False"
	I0818 12:06:53.459316    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:53.459335    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:53.459343    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:53.459349    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:53.464675    3824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 12:06:53.465233    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:53.465241    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:53.465248    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:53.465251    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:53.470328    3824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 12:06:53.960317    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:53.960343    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:53.960354    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:53.960360    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:53.964420    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:06:53.965229    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:53.965236    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:53.965242    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:53.965246    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:53.967660    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:54.459303    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:54.459315    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:54.459321    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:54.459324    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:54.461902    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:54.462298    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:54.462305    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:54.462310    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:54.462313    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:54.464747    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:54.960293    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:54.960319    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:54.960331    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:54.960339    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:54.963847    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:54.964473    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:54.964483    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:54.964491    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:54.964497    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:54.966299    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:55.459778    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:55.459804    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:55.459816    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:55.459824    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:55.463395    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:55.464072    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:55.464083    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:55.464091    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:55.464095    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:55.465859    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:55.466228    3824 pod_ready.go:103] pod "etcd-ha-373000-m03" in "kube-system" namespace has status "Ready":"False"
	I0818 12:06:55.959274    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:55.959295    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:55.959306    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:55.959313    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:55.962842    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:55.963214    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:55.963221    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:55.963227    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:55.963230    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:55.964851    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:56.459680    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:56.459702    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:56.459713    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:56.459719    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:56.463508    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:56.463978    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:56.463986    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:56.463993    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:56.463996    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:56.465851    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:56.959108    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:56.959168    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:56.959180    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:56.959188    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:56.962593    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:56.963101    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:56.963111    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:56.963119    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:56.963124    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:56.964734    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:57.458993    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:57.459009    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:57.459033    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:57.459044    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:57.461199    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:57.461630    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:57.461638    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:57.461644    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:57.461647    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:57.464799    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:57.959429    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:57.959455    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:57.959466    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:57.959471    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:57.962366    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:57.962731    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:57.962739    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:57.962745    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:57.962748    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:57.964355    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:57.964866    3824 pod_ready.go:103] pod "etcd-ha-373000-m03" in "kube-system" namespace has status "Ready":"False"
	I0818 12:06:58.459677    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:58.459697    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.459709    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.459714    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.463092    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:58.463794    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:58.463802    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.463809    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.463811    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.465563    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:58.959591    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:58.959612    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.959623    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.959631    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.963002    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:58.964342    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:58.964361    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.964371    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.964377    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.966371    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:58.966690    3824 pod_ready.go:93] pod "etcd-ha-373000-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:58.966699    3824 pod_ready.go:82] duration metric: took 19.007875373s for pod "etcd-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
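
The loop that ends here is the readiness poll behind pod_ready.go: roughly every 500ms it GETs the pod and then the node hosting it, until the pod's Ready condition turns True (here after 19.0s). Below is a minimal client-go sketch of that pattern; the kubeconfig path, poll interval, and program structure are illustrative assumptions, not minikube's actual wiring.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Illustrative kubeconfig path; the test harness builds its client elsewhere.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(
                context.TODO(), "etcd-ha-373000-m03", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence logged above
        }
    }

The same pod+node GET pair repeats below for every control-plane pod the test waits on; only the pod and node names change.
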
	I0818 12:06:58.966710    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:58.966744    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000
	I0818 12:06:58.966749    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.966754    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.966759    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.968551    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:58.969049    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:06:58.969056    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.969062    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.969065    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.970647    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:58.971055    3824 pod_ready.go:93] pod "kube-apiserver-ha-373000" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:58.971063    3824 pod_ready.go:82] duration metric: took 4.347127ms for pod "kube-apiserver-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:58.971069    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:58.971100    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:06:58.971105    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.971110    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.971116    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.972830    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:58.973265    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:06:58.973273    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.973279    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.973282    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.974809    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:58.975155    3824 pod_ready.go:93] pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:58.975165    3824 pod_ready.go:82] duration metric: took 4.091205ms for pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:58.975172    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:58.975209    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m03
	I0818 12:06:58.975214    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.975219    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.975223    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.976734    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:58.977185    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:58.977194    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.977199    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.977203    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.978595    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:58.978942    3824 pod_ready.go:93] pod "kube-apiserver-ha-373000-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:58.978951    3824 pod_ready.go:82] duration metric: took 3.77353ms for pod "kube-apiserver-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:58.978957    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:58.978988    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:06:58.978993    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.978999    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.979003    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.980398    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:58.980845    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:06:58.980852    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.980858    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.980861    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.982260    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:58.982600    3824 pod_ready.go:93] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:58.982608    3824 pod_ready.go:82] duration metric: took 3.645796ms for pod "kube-controller-manager-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:58.982614    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:59.160214    3824 request.go:632] Waited for 177.557781ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000-m02
	I0818 12:06:59.160303    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000-m02
	I0818 12:06:59.160314    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:59.160334    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:59.160341    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:59.163272    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:59.360510    3824 request.go:632] Waited for 196.433912ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:06:59.360620    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:06:59.360630    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:59.360640    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:59.360649    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:59.364048    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:59.364505    3824 pod_ready.go:93] pod "kube-controller-manager-ha-373000-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:59.364516    3824 pod_ready.go:82] duration metric: took 381.90816ms for pod "kube-controller-manager-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
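
The "Waited for ... due to client-side throttling" lines that start appearing here are emitted by client-go's default rate limiter: the paired pod+node GETs exceed the client's QPS budget, so request.go sleeps before sending. A sketch of where that knob lives; the raised values are illustrative, not what minikube configures.

    package throttledemo

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        // client-go defaults to QPS=5 and Burst=10; bursts of back-to-back
        // requests beyond that budget trigger the logged ~200ms waits.
        cfg.QPS = 50    // illustrative value only
        cfg.Burst = 100 // illustrative value only
        return kubernetes.NewForConfig(cfg)
    }
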
	I0818 12:06:59.364525    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:59.559640    3824 request.go:632] Waited for 195.079426ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000-m03
	I0818 12:06:59.559699    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000-m03
	I0818 12:06:59.559705    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:59.559711    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:59.559715    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:59.561728    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:59.760676    3824 request.go:632] Waited for 198.422535ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:59.760731    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:59.760742    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:59.760754    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:59.760761    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:59.764272    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:59.764909    3824 pod_ready.go:93] pod "kube-controller-manager-ha-373000-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:59.764919    3824 pod_ready.go:82] duration metric: took 400.401698ms for pod "kube-controller-manager-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:59.764926    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2xkhp" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:59.960270    3824 request.go:632] Waited for 195.290695ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2xkhp
	I0818 12:06:59.960398    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2xkhp
	I0818 12:06:59.960409    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:59.960422    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:59.960432    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:59.963585    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:00.161284    3824 request.go:632] Waited for 197.152508ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:07:00.161348    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:07:00.161357    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:00.161364    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:00.161368    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:00.163499    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:07:00.163968    3824 pod_ready.go:93] pod "kube-proxy-2xkhp" in "kube-system" namespace has status "Ready":"True"
	I0818 12:07:00.163978    3824 pod_ready.go:82] duration metric: took 399.059814ms for pod "kube-proxy-2xkhp" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:00.163984    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5hg88" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:00.360550    3824 request.go:632] Waited for 196.524224ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5hg88
	I0818 12:07:00.360645    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5hg88
	I0818 12:07:00.360674    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:00.360705    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:00.360715    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:00.364230    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:00.559710    3824 request.go:632] Waited for 194.892476ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:07:00.559754    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:07:00.559760    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:00.559767    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:00.559770    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:00.561706    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:07:00.562031    3824 pod_ready.go:93] pod "kube-proxy-5hg88" in "kube-system" namespace has status "Ready":"True"
	I0818 12:07:00.562041    3824 pod_ready.go:82] duration metric: took 398.063984ms for pod "kube-proxy-5hg88" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:00.562048    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bprqp" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:00.760849    3824 request.go:632] Waited for 198.76912ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bprqp
	I0818 12:07:00.760881    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bprqp
	I0818 12:07:00.760887    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:00.760893    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:00.760897    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:00.763176    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:07:00.959686    3824 request.go:632] Waited for 195.875972ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:07:00.959818    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:07:00.959837    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:00.959848    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:00.959855    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:00.963072    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:00.963632    3824 pod_ready.go:93] pod "kube-proxy-bprqp" in "kube-system" namespace has status "Ready":"True"
	I0818 12:07:00.963645    3824 pod_ready.go:82] duration metric: took 401.603061ms for pod "kube-proxy-bprqp" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:00.963654    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-l7zlx" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:01.160451    3824 request.go:632] Waited for 196.719541ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l7zlx
	I0818 12:07:01.160506    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l7zlx
	I0818 12:07:01.160515    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:01.160526    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:01.160534    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:01.163885    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:01.360939    3824 request.go:632] Waited for 196.415223ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m04
	I0818 12:07:01.361054    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m04
	I0818 12:07:01.361063    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:01.361074    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:01.361081    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:01.364720    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:01.365356    3824 pod_ready.go:98] node "ha-373000-m04" hosting pod "kube-proxy-l7zlx" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-373000-m04" has status "Ready":"Unknown"
	I0818 12:07:01.365374    3824 pod_ready.go:82] duration metric: took 401.724878ms for pod "kube-proxy-l7zlx" in "kube-system" namespace to be "Ready" ...
	E0818 12:07:01.365383    3824 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-373000-m04" hosting pod "kube-proxy-l7zlx" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-373000-m04" has status "Ready":"Unknown"
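
The skip above is driven by the hosting node's condition, not the pod's: ha-373000-m04 reports Ready "Unknown", so the wait is abandoned early rather than left to time out. A sketch of that node-side check; the helper name is mine, not minikube's.

    package readycheck

    import corev1 "k8s.io/api/core/v1"

    // nodeIsReady reports whether a node's Ready condition is True; a pod on a
    // node whose Ready status is False or Unknown is skipped, as logged above.
    func nodeIsReady(node *corev1.Node) bool {
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
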
	I0818 12:07:01.365389    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:01.560679    3824 request.go:632] Waited for 195.242196ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000
	I0818 12:07:01.560723    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000
	I0818 12:07:01.560732    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:01.560740    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:01.560745    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:01.562645    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:07:01.761089    3824 request.go:632] Waited for 198.042947ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:07:01.761190    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:07:01.761200    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:01.761212    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:01.761218    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:01.764398    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:01.764800    3824 pod_ready.go:93] pod "kube-scheduler-ha-373000" in "kube-system" namespace has status "Ready":"True"
	I0818 12:07:01.764826    3824 pod_ready.go:82] duration metric: took 399.443504ms for pod "kube-scheduler-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:01.764834    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:01.959600    3824 request.go:632] Waited for 194.717673ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000-m02
	I0818 12:07:01.959651    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000-m02
	I0818 12:07:01.959662    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:01.959672    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:01.959678    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:01.963127    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:02.159886    3824 request.go:632] Waited for 196.172195ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:07:02.159958    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:07:02.159975    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:02.159988    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:02.159997    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:02.163322    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:02.163764    3824 pod_ready.go:93] pod "kube-scheduler-ha-373000-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 12:07:02.163775    3824 pod_ready.go:82] duration metric: took 398.944902ms for pod "kube-scheduler-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:02.163781    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:02.359608    3824 request.go:632] Waited for 195.759022ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000-m03
	I0818 12:07:02.359664    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000-m03
	I0818 12:07:02.359677    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:02.359715    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:02.359722    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:02.363386    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:02.560395    3824 request.go:632] Waited for 196.314469ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:07:02.560474    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:07:02.560483    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:02.560491    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:02.560495    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:02.563041    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:07:02.563443    3824 pod_ready.go:93] pod "kube-scheduler-ha-373000-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 12:07:02.563453    3824 pod_ready.go:82] duration metric: took 399.678634ms for pod "kube-scheduler-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:02.563460    3824 pod_ready.go:39] duration metric: took 22.636385926s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 12:07:02.563470    3824 api_server.go:52] waiting for apiserver process to appear ...
	I0818 12:07:02.563523    3824 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 12:07:02.576904    3824 api_server.go:72] duration metric: took 23.340671308s to wait for apiserver process to appear ...
	I0818 12:07:02.576917    3824 api_server.go:88] waiting for apiserver healthz status ...
	I0818 12:07:02.576928    3824 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0818 12:07:02.581021    3824 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
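
The healthz probe above is a plain HTTPS GET against the apiserver that expects status 200 and the body "ok". A standalone sketch follows; TLS verification is disabled here purely for brevity, whereas a real client should trust the cluster CA.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Illustration only: trust the cluster CA instead of skipping verification.
        c := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := c.Get("https://192.169.0.5:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }
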
	I0818 12:07:02.581063    3824 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0818 12:07:02.581069    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:02.581075    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:02.581080    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:02.581650    3824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0818 12:07:02.581745    3824 api_server.go:141] control plane version: v1.31.0
	I0818 12:07:02.581754    3824 api_server.go:131] duration metric: took 4.833461ms to wait for apiserver health ...
	I0818 12:07:02.581759    3824 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 12:07:02.760273    3824 request.go:632] Waited for 178.46854ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0818 12:07:02.760344    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0818 12:07:02.760352    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:02.760358    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:02.760361    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:02.765147    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:07:02.770514    3824 system_pods.go:59] 26 kube-system pods found
	I0818 12:07:02.770527    3824 system_pods.go:61] "coredns-6f6b679f8f-hv98f" [3ebd3bbf-bcb9-4afc-80f3-eaeb314da8eb] Running
	I0818 12:07:02.770531    3824 system_pods.go:61] "coredns-6f6b679f8f-rcfmc" [904f6af6-c2de-4bc2-bc1d-f87e209ef4ed] Running
	I0818 12:07:02.770534    3824 system_pods.go:61] "etcd-ha-373000" [29cf5b88-f2b5-40d4-9f40-5aedcc75c4d1] Running
	I0818 12:07:02.770537    3824 system_pods.go:61] "etcd-ha-373000-m02" [6b2f793b-6d8c-4580-b555-2be21fb70246] Running
	I0818 12:07:02.770539    3824 system_pods.go:61] "etcd-ha-373000-m03" [377788a8-77e7-4e5e-a488-9791f8e26b32] Running
	I0818 12:07:02.770545    3824 system_pods.go:61] "kindnet-2gf5h" [ff15d17a-fb96-4721-847f-13f5c0e2613a] Running
	I0818 12:07:02.770549    3824 system_pods.go:61] "kindnet-k4c4p" [5219ec54-5c0a-471a-a99f-2823b4f944d9] Running
	I0818 12:07:02.770552    3824 system_pods.go:61] "kindnet-q7ghp" [933c4eb9-1d38-4575-a873-183f2fd31b25] Running
	I0818 12:07:02.770556    3824 system_pods.go:61] "kindnet-wxcx9" [11d95b76-db04-43de-9d3d-ce2147fbed21] Running
	I0818 12:07:02.770558    3824 system_pods.go:61] "kube-apiserver-ha-373000" [cc4f174d-0e3c-4aa8-b039-0bf57d6b0228] Running
	I0818 12:07:02.770561    3824 system_pods.go:61] "kube-apiserver-ha-373000-m02" [b72f02e8-46f2-46d4-8e73-03281cc424fc] Running
	I0818 12:07:02.770564    3824 system_pods.go:61] "kube-apiserver-ha-373000-m03" [5d3892b8-07e1-40a0-b4dc-4d9d9e8bc254] Running
	I0818 12:07:02.770566    3824 system_pods.go:61] "kube-controller-manager-ha-373000" [78b55cfd-8f33-4071-99b0-900fcb256ed1] Running
	I0818 12:07:02.770570    3824 system_pods.go:61] "kube-controller-manager-ha-373000-m02" [e261db19-d00f-41d1-aaf2-c1b77067c217] Running
	I0818 12:07:02.770573    3824 system_pods.go:61] "kube-controller-manager-ha-373000-m03" [fa71d7d9-904d-4df6-afe9-768758e32854] Running
	I0818 12:07:02.770577    3824 system_pods.go:61] "kube-proxy-2xkhp" [6a1cb8ad-e118-4db0-8852-b2d4973f536a] Running
	I0818 12:07:02.770580    3824 system_pods.go:61] "kube-proxy-5hg88" [4d00bf3e-8817-4220-91c5-7085fa453fa6] Running
	I0818 12:07:02.770583    3824 system_pods.go:61] "kube-proxy-bprqp" [4cf4fd8d-01fb-4e36-bede-6a44ab84b7bf] Running
	I0818 12:07:02.770585    3824 system_pods.go:61] "kube-proxy-l7zlx" [853afdf8-598a-435c-8c48-233287580493] Running
	I0818 12:07:02.770588    3824 system_pods.go:61] "kube-scheduler-ha-373000" [b1284a96-b76a-4909-9301-bfff7bbef8d6] Running
	I0818 12:07:02.770590    3824 system_pods.go:61] "kube-scheduler-ha-373000-m02" [fdfdc014-0b22-48aa-a1c6-5885bc67d472] Running
	I0818 12:07:02.770593    3824 system_pods.go:61] "kube-scheduler-ha-373000-m03" [89e378c4-274f-4aa1-8683-127061a7033e] Running
	I0818 12:07:02.770596    3824 system_pods.go:61] "kube-vip-ha-373000" [8df6a228-9693-4a21-af87-1bc9fc2af995] Running
	I0818 12:07:02.770598    3824 system_pods.go:61] "kube-vip-ha-373000-m02" [9cf001da-2ec7-4c04-bcb8-baa2bd4626f4] Running
	I0818 12:07:02.770601    3824 system_pods.go:61] "kube-vip-ha-373000-m03" [2e8a44eb-d965-4a90-ae08-464c7064ae17] Running
	I0818 12:07:02.770603    3824 system_pods.go:61] "storage-provisioner" [aa9c6f5d-6c1e-4901-83bb-62bc420ea044] Running
	I0818 12:07:02.770607    3824 system_pods.go:74] duration metric: took 188.849851ms to wait for pod list to return data ...
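
The 26-pod inventory above comes from a single List call against the kube-system namespace rather than per-pod GETs. A compact sketch of the same listing; clientset construction is omitted (see the earlier sketch), and the function name is illustrative.

    package inventory

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // listKubeSystemPods prints each kube-system pod with its UID and phase,
    // mirroring the `"name" [uid] Running` lines logged above.
    func listKubeSystemPods(cs *kubernetes.Clientset) error {
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, p := range pods.Items {
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
        return nil
    }
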
	I0818 12:07:02.770613    3824 default_sa.go:34] waiting for default service account to be created ...
	I0818 12:07:02.959522    3824 request.go:632] Waited for 188.86655ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0818 12:07:02.959578    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0818 12:07:02.959587    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:02.959598    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:02.959608    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:02.963054    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:02.963263    3824 default_sa.go:45] found service account: "default"
	I0818 12:07:02.963277    3824 default_sa.go:55] duration metric: took 192.665025ms for default service account to be created ...
	I0818 12:07:02.963284    3824 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 12:07:03.160239    3824 request.go:632] Waited for 196.905811ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0818 12:07:03.160320    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0818 12:07:03.160329    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:03.160341    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:03.160363    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:03.165404    3824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 12:07:03.170694    3824 system_pods.go:86] 26 kube-system pods found
	I0818 12:07:03.170706    3824 system_pods.go:89] "coredns-6f6b679f8f-hv98f" [3ebd3bbf-bcb9-4afc-80f3-eaeb314da8eb] Running
	I0818 12:07:03.170710    3824 system_pods.go:89] "coredns-6f6b679f8f-rcfmc" [904f6af6-c2de-4bc2-bc1d-f87e209ef4ed] Running
	I0818 12:07:03.170714    3824 system_pods.go:89] "etcd-ha-373000" [29cf5b88-f2b5-40d4-9f40-5aedcc75c4d1] Running
	I0818 12:07:03.170717    3824 system_pods.go:89] "etcd-ha-373000-m02" [6b2f793b-6d8c-4580-b555-2be21fb70246] Running
	I0818 12:07:03.170720    3824 system_pods.go:89] "etcd-ha-373000-m03" [377788a8-77e7-4e5e-a488-9791f8e26b32] Running
	I0818 12:07:03.170723    3824 system_pods.go:89] "kindnet-2gf5h" [ff15d17a-fb96-4721-847f-13f5c0e2613a] Running
	I0818 12:07:03.170725    3824 system_pods.go:89] "kindnet-k4c4p" [5219ec54-5c0a-471a-a99f-2823b4f944d9] Running
	I0818 12:07:03.170728    3824 system_pods.go:89] "kindnet-q7ghp" [933c4eb9-1d38-4575-a873-183f2fd31b25] Running
	I0818 12:07:03.170731    3824 system_pods.go:89] "kindnet-wxcx9" [11d95b76-db04-43de-9d3d-ce2147fbed21] Running
	I0818 12:07:03.170733    3824 system_pods.go:89] "kube-apiserver-ha-373000" [cc4f174d-0e3c-4aa8-b039-0bf57d6b0228] Running
	I0818 12:07:03.170737    3824 system_pods.go:89] "kube-apiserver-ha-373000-m02" [b72f02e8-46f2-46d4-8e73-03281cc424fc] Running
	I0818 12:07:03.170740    3824 system_pods.go:89] "kube-apiserver-ha-373000-m03" [5d3892b8-07e1-40a0-b4dc-4d9d9e8bc254] Running
	I0818 12:07:03.170743    3824 system_pods.go:89] "kube-controller-manager-ha-373000" [78b55cfd-8f33-4071-99b0-900fcb256ed1] Running
	I0818 12:07:03.170746    3824 system_pods.go:89] "kube-controller-manager-ha-373000-m02" [e261db19-d00f-41d1-aaf2-c1b77067c217] Running
	I0818 12:07:03.170749    3824 system_pods.go:89] "kube-controller-manager-ha-373000-m03" [fa71d7d9-904d-4df6-afe9-768758e32854] Running
	I0818 12:07:03.170752    3824 system_pods.go:89] "kube-proxy-2xkhp" [6a1cb8ad-e118-4db0-8852-b2d4973f536a] Running
	I0818 12:07:03.170755    3824 system_pods.go:89] "kube-proxy-5hg88" [4d00bf3e-8817-4220-91c5-7085fa453fa6] Running
	I0818 12:07:03.170757    3824 system_pods.go:89] "kube-proxy-bprqp" [4cf4fd8d-01fb-4e36-bede-6a44ab84b7bf] Running
	I0818 12:07:03.170760    3824 system_pods.go:89] "kube-proxy-l7zlx" [853afdf8-598a-435c-8c48-233287580493] Running
	I0818 12:07:03.170763    3824 system_pods.go:89] "kube-scheduler-ha-373000" [b1284a96-b76a-4909-9301-bfff7bbef8d6] Running
	I0818 12:07:03.170765    3824 system_pods.go:89] "kube-scheduler-ha-373000-m02" [fdfdc014-0b22-48aa-a1c6-5885bc67d472] Running
	I0818 12:07:03.170769    3824 system_pods.go:89] "kube-scheduler-ha-373000-m03" [89e378c4-274f-4aa1-8683-127061a7033e] Running
	I0818 12:07:03.170772    3824 system_pods.go:89] "kube-vip-ha-373000" [8df6a228-9693-4a21-af87-1bc9fc2af995] Running
	I0818 12:07:03.170774    3824 system_pods.go:89] "kube-vip-ha-373000-m02" [9cf001da-2ec7-4c04-bcb8-baa2bd4626f4] Running
	I0818 12:07:03.170777    3824 system_pods.go:89] "kube-vip-ha-373000-m03" [2e8a44eb-d965-4a90-ae08-464c7064ae17] Running
	I0818 12:07:03.170779    3824 system_pods.go:89] "storage-provisioner" [aa9c6f5d-6c1e-4901-83bb-62bc420ea044] Running
	I0818 12:07:03.170784    3824 system_pods.go:126] duration metric: took 207.500936ms to wait for k8s-apps to be running ...
	I0818 12:07:03.170789    3824 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 12:07:03.170841    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 12:07:03.182482    3824 system_svc.go:56] duration metric: took 11.680891ms WaitForService to wait for kubelet
	I0818 12:07:03.182502    3824 kubeadm.go:582] duration metric: took 23.946290558s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:07:03.182518    3824 node_conditions.go:102] verifying NodePressure condition ...
	I0818 12:07:03.360851    3824 request.go:632] Waited for 178.265424ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0818 12:07:03.360972    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0818 12:07:03.360984    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:03.360994    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:03.361004    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:03.364644    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:03.365979    3824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 12:07:03.365989    3824 node_conditions.go:123] node cpu capacity is 2
	I0818 12:07:03.365996    3824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 12:07:03.365999    3824 node_conditions.go:123] node cpu capacity is 2
	I0818 12:07:03.366002    3824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 12:07:03.366005    3824 node_conditions.go:123] node cpu capacity is 2
	I0818 12:07:03.366008    3824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 12:07:03.366011    3824 node_conditions.go:123] node cpu capacity is 2
	I0818 12:07:03.366014    3824 node_conditions.go:105] duration metric: took 183.498142ms to run NodePressure ...
	I0818 12:07:03.366022    3824 start.go:241] waiting for startup goroutines ...
	I0818 12:07:03.366037    3824 start.go:255] writing updated cluster config ...
	I0818 12:07:03.387453    3824 out.go:201] 
	I0818 12:07:03.408870    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:07:03.408996    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:07:03.431363    3824 out.go:177] * Starting "ha-373000-m04" worker node in "ha-373000" cluster
	I0818 12:07:03.473303    3824 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:07:03.473331    3824 cache.go:56] Caching tarball of preloaded images
	I0818 12:07:03.473487    3824 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0818 12:07:03.473500    3824 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:07:03.473589    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:07:03.474432    3824 start.go:360] acquireMachinesLock for ha-373000-m04: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:07:03.474523    3824 start.go:364] duration metric: took 71.686µs to acquireMachinesLock for "ha-373000-m04"
	I0818 12:07:03.474542    3824 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:07:03.474548    3824 fix.go:54] fixHost starting: m04
	I0818 12:07:03.474855    3824 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:07:03.474882    3824 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:07:03.484549    3824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51853
	I0818 12:07:03.484938    3824 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:07:03.485323    3824 main.go:141] libmachine: Using API Version  1
	I0818 12:07:03.485338    3824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:07:03.485563    3824 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:07:03.485683    3824 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	I0818 12:07:03.485781    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetState
	I0818 12:07:03.485864    3824 main.go:141] libmachine: (ha-373000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:07:03.485969    3824 main.go:141] libmachine: (ha-373000-m04) DBG | hyperkit pid from json: 3421
	I0818 12:07:03.486880    3824 main.go:141] libmachine: (ha-373000-m04) DBG | hyperkit pid 3421 missing from process table
	I0818 12:07:03.486901    3824 fix.go:112] recreateIfNeeded on ha-373000-m04: state=Stopped err=<nil>
	I0818 12:07:03.486912    3824 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	W0818 12:07:03.486988    3824 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:07:03.508504    3824 out.go:177] * Restarting existing hyperkit VM for "ha-373000-m04" ...
	I0818 12:07:03.582318    3824 main.go:141] libmachine: (ha-373000-m04) Calling .Start
	I0818 12:07:03.582606    3824 main.go:141] libmachine: (ha-373000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:07:03.582712    3824 main.go:141] libmachine: (ha-373000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/hyperkit.pid
	I0818 12:07:03.582838    3824 main.go:141] libmachine: (ha-373000-m04) DBG | Using UUID 421610dc-2abf-427c-8c2b-c85701e511a2
	I0818 12:07:03.610902    3824 main.go:141] libmachine: (ha-373000-m04) DBG | Generated MAC f2:8c:91:ee:dd:c0
	I0818 12:07:03.610923    3824 main.go:141] libmachine: (ha-373000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000
	I0818 12:07:03.611054    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"421610dc-2abf-427c-8c2b-c85701e511a2", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000299560)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:07:03.611081    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"421610dc-2abf-427c-8c2b-c85701e511a2", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000299560)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:07:03.611126    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "421610dc-2abf-427c-8c2b-c85701e511a2", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/ha-373000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"}
	I0818 12:07:03.611176    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 421610dc-2abf-427c-8c2b-c85701e511a2 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/ha-373000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"
	I0818 12:07:03.611189    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 12:07:03.612626    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 DEBUG: hyperkit: Pid is 3877
	I0818 12:07:03.613079    3824 main.go:141] libmachine: (ha-373000-m04) DBG | Attempt 0
	I0818 12:07:03.613097    3824 main.go:141] libmachine: (ha-373000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:07:03.613147    3824 main.go:141] libmachine: (ha-373000-m04) DBG | hyperkit pid from json: 3877
	I0818 12:07:03.614336    3824 main.go:141] libmachine: (ha-373000-m04) DBG | Searching for f2:8c:91:ee:dd:c0 in /var/db/dhcpd_leases ...
	I0818 12:07:03.614413    3824 main.go:141] libmachine: (ha-373000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0818 12:07:03.614438    3824 main.go:141] libmachine: (ha-373000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c3979e}
	I0818 12:07:03.614464    3824 main.go:141] libmachine: (ha-373000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c3975b}
	I0818 12:07:03.614488    3824 main.go:141] libmachine: (ha-373000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39749}
	I0818 12:07:03.614500    3824 main.go:141] libmachine: (ha-373000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c245a6}
	I0818 12:07:03.614507    3824 main.go:141] libmachine: (ha-373000-m04) DBG | Found match: f2:8c:91:ee:dd:c0
	I0818 12:07:03.614515    3824 main.go:141] libmachine: (ha-373000-m04) DBG | IP: 192.169.0.8
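	The IP resolution above works by matching the MAC the driver generated against macOS's DHCP lease database, /var/db/dhcpd_leases. The same lookup can be reproduced by hand on the host (MAC taken from this run; the -A/-B context widths are just a guess at the lease entry size):
	
		# Show the bootpd lease entry for the VM's MAC on the macOS host.
		grep -A3 -B2 -i 'f2:8c:91:ee:dd:c0' /var/db/dhcpd_leases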
	I0818 12:07:03.614531    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetConfigRaw
	I0818 12:07:03.615303    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetIP
	I0818 12:07:03.615492    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:07:03.615967    3824 machine.go:93] provisionDockerMachine start ...
	I0818 12:07:03.615979    3824 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	I0818 12:07:03.616121    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:03.616256    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:03.616397    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:03.616508    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:03.616609    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:03.616727    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:07:03.616882    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0818 12:07:03.616892    3824 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 12:07:03.621176    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 12:07:03.629669    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 12:07:03.630674    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:07:03.630697    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:07:03.630709    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:07:03.630724    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:07:04.012965    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 12:07:04.012987    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 12:07:04.127720    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:07:04.127750    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:07:04.127760    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:07:04.127778    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:07:04.128559    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 12:07:04.128569    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 12:07:09.784251    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:09 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0818 12:07:09.784338    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:09 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0818 12:07:09.784350    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:09 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0818 12:07:09.808163    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:09 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0818 12:07:14.674465    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 12:07:14.674484    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetMachineName
	I0818 12:07:14.674657    3824 buildroot.go:166] provisioning hostname "ha-373000-m04"
	I0818 12:07:14.674669    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetMachineName
	I0818 12:07:14.674755    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:14.674835    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:14.674920    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:14.675008    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:14.675105    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:14.675237    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:07:14.675389    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0818 12:07:14.675398    3824 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-373000-m04 && echo "ha-373000-m04" | sudo tee /etc/hostname
	I0818 12:07:14.738016    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-373000-m04
	
	I0818 12:07:14.738030    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:14.738166    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:14.738262    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:14.738354    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:14.738444    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:14.738575    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:07:14.738730    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0818 12:07:14.738742    3824 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-373000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-373000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-373000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 12:07:14.800929    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: 
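	The hostname script above is an idempotent /etc/hosts edit: if no line already maps the hostname, it rewrites an existing 127.0.1.1 entry, otherwise appends one. A standalone sketch of the same logic for the guest, with the hostname value taken from this run:
	
		#!/bin/sh
		# Map 127.0.1.1 to the node hostname exactly once in /etc/hosts.
		NODE=ha-373000-m04
		if ! grep -q "[[:space:]]${NODE}\$" /etc/hosts; then
			if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
				sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NODE}/" /etc/hosts
			else
				echo "127.0.1.1 ${NODE}" | sudo tee -a /etc/hosts
			fi
		fi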
	I0818 12:07:14.800946    3824 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1007/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1007/.minikube}
	I0818 12:07:14.800959    3824 buildroot.go:174] setting up certificates
	I0818 12:07:14.800965    3824 provision.go:84] configureAuth start
	I0818 12:07:14.800972    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetMachineName
	I0818 12:07:14.801115    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetIP
	I0818 12:07:14.801241    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:14.801327    3824 provision.go:143] copyHostCerts
	I0818 12:07:14.801357    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:07:14.801411    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem, removing ...
	I0818 12:07:14.801417    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:07:14.801581    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem (1082 bytes)
	I0818 12:07:14.801805    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:07:14.801837    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem, removing ...
	I0818 12:07:14.801842    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:07:14.801922    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem (1123 bytes)
	I0818 12:07:14.802072    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:07:14.802105    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem, removing ...
	I0818 12:07:14.802110    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:07:14.802180    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem (1675 bytes)
	I0818 12:07:14.802329    3824 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem org=jenkins.ha-373000-m04 san=[127.0.0.1 192.169.0.8 ha-373000-m04 localhost minikube]
	I0818 12:07:15.264268    3824 provision.go:177] copyRemoteCerts
	I0818 12:07:15.264318    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 12:07:15.264333    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:15.264514    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:15.264635    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:15.264736    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:15.264840    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/id_rsa Username:docker}
	I0818 12:07:15.297241    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 12:07:15.297314    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 12:07:15.317451    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 12:07:15.317516    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0818 12:07:15.337321    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 12:07:15.337400    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 12:07:15.357216    3824 provision.go:87] duration metric: took 556.258633ms to configureAuth
	I0818 12:07:15.357236    3824 buildroot.go:189] setting minikube options for container-runtime
	I0818 12:07:15.357403    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:07:15.357417    3824 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	I0818 12:07:15.357555    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:15.357641    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:15.357721    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:15.357806    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:15.357885    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:15.357993    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:07:15.358121    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0818 12:07:15.358132    3824 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0818 12:07:15.410788    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0818 12:07:15.410801    3824 buildroot.go:70] root file system type: tmpfs
	I0818 12:07:15.410873    3824 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0818 12:07:15.410885    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:15.411015    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:15.411098    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:15.411194    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:15.411280    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:15.411394    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:07:15.411541    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0818 12:07:15.411587    3824 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0818 12:07:15.476241    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	Environment=NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0818 12:07:15.476261    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:15.476401    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:15.476490    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:15.476597    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:15.476697    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:15.476838    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:07:15.476977    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0818 12:07:15.476990    3824 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0818 12:07:17.071913    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0818 12:07:17.071932    3824 machine.go:96] duration metric: took 13.456373306s to provisionDockerMachine
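	Two details of the unit install above are worth noting. First, the three Environment=NO_PROXY lines are not cumulative: systemd lets a later Environment= assignment of the same variable override earlier ones, so only the last line (all three control-plane IPs) is in effect. Second, the install is a write-then-swap: the unit is rendered to docker.service.new and only moved into place, with a daemon-reload and restart, when it differs from what is installed; the "can't stat" diff error simply means no docker.service existed yet on this boot (the root filesystem is tmpfs, per the check above). A sketch of the swap step in isolation:
	
		# Swap in the freshly rendered unit only when it differs (or is new).
		UNIT=/lib/systemd/system/docker.service
		sudo diff -u "$UNIT" "$UNIT.new" || {
			sudo mv "$UNIT.new" "$UNIT"
			sudo systemctl -f daemon-reload &&
			sudo systemctl -f enable docker &&
			sudo systemctl -f restart docker
		}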
	I0818 12:07:17.071939    3824 start.go:293] postStartSetup for "ha-373000-m04" (driver="hyperkit")
	I0818 12:07:17.071946    3824 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 12:07:17.071960    3824 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	I0818 12:07:17.072162    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 12:07:17.072176    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:17.072278    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:17.072367    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:17.072484    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:17.072586    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/id_rsa Username:docker}
	I0818 12:07:17.114832    3824 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 12:07:17.118934    3824 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 12:07:17.118950    3824 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/addons for local assets ...
	I0818 12:07:17.119044    3824 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/files for local assets ...
	I0818 12:07:17.119187    3824 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> 15262.pem in /etc/ssl/certs
	I0818 12:07:17.119194    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /etc/ssl/certs/15262.pem
	I0818 12:07:17.119347    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 12:07:17.131072    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:07:17.162572    3824 start.go:296] duration metric: took 90.627646ms for postStartSetup
	I0818 12:07:17.162595    3824 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	I0818 12:07:17.162766    3824 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0818 12:07:17.162780    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:17.162865    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:17.162946    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:17.163031    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:17.163111    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/id_rsa Username:docker}
	I0818 12:07:17.196597    3824 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0818 12:07:17.196659    3824 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0818 12:07:17.249652    3824 fix.go:56] duration metric: took 13.775528593s for fixHost
	I0818 12:07:17.249680    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:17.249818    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:17.249905    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:17.249992    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:17.250086    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:17.250222    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:07:17.250363    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0818 12:07:17.250370    3824 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 12:07:17.303909    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724008037.336410727
	
	I0818 12:07:17.303922    3824 fix.go:216] guest clock: 1724008037.336410727
	I0818 12:07:17.303927    3824 fix.go:229] Guest: 2024-08-18 12:07:17.336410727 -0700 PDT Remote: 2024-08-18 12:07:17.249669 -0700 PDT m=+165.308150896 (delta=86.741727ms)
	I0818 12:07:17.303937    3824 fix.go:200] guest clock delta is within tolerance: 86.741727ms
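	The tolerance check is plain subtraction of the two clocks printed above: 1724008037.336410727 (guest) - 1724008037.249669 (host) = 0.086741727 s, i.e. about 86.7 ms of skew, so the guest clock is left alone.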
	I0818 12:07:17.303941    3824 start.go:83] releasing machines lock for "ha-373000-m04", held for 13.829839932s
	I0818 12:07:17.303960    3824 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	I0818 12:07:17.304093    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetIP
	I0818 12:07:17.325783    3824 out.go:177] * Found network options:
	I0818 12:07:17.347322    3824 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	W0818 12:07:17.368151    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	W0818 12:07:17.368179    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	W0818 12:07:17.368192    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 12:07:17.368225    3824 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	I0818 12:07:17.368728    3824 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	I0818 12:07:17.368862    3824 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	I0818 12:07:17.368947    3824 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 12:07:17.368991    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	W0818 12:07:17.369043    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	W0818 12:07:17.369069    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	W0818 12:07:17.369086    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 12:07:17.369158    3824 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0818 12:07:17.369174    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:17.369197    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:17.369352    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:17.369370    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:17.369488    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:17.369507    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:17.369677    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/id_rsa Username:docker}
	I0818 12:07:17.369697    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:17.369814    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/id_rsa Username:docker}
	W0818 12:07:17.399808    3824 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 12:07:17.399874    3824 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 12:07:17.453508    3824 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 12:07:17.453527    3824 start.go:495] detecting cgroup driver to use...
	I0818 12:07:17.453602    3824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:07:17.468947    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0818 12:07:17.477909    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0818 12:07:17.486368    3824 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0818 12:07:17.486429    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0818 12:07:17.495070    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:07:17.503908    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0818 12:07:17.512255    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:07:17.520784    3824 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 12:07:17.529449    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0818 12:07:17.538408    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0818 12:07:17.546916    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0818 12:07:17.555361    3824 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 12:07:17.562930    3824 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 12:07:17.571624    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:07:17.670212    3824 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0818 12:07:17.690532    3824 start.go:495] detecting cgroup driver to use...
	I0818 12:07:17.690608    3824 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0818 12:07:17.710894    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:07:17.721349    3824 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 12:07:17.738837    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:07:17.750943    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:07:17.762092    3824 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0818 12:07:17.786808    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:07:17.798198    3824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:07:17.813512    3824 ssh_runner.go:195] Run: which cri-dockerd
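	With Docker selected as the runtime, crictl is repointed from the containerd socket configured earlier to cri-dockerd, and the shim binary is located. The crictl.yaml write is equivalent to this one-liner:
	
		# Point crictl at cri-dockerd instead of containerd.
		printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\n' | sudo tee /etc/crictl.yaml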
	I0818 12:07:17.816407    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0818 12:07:17.824320    3824 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0818 12:07:17.838071    3824 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0818 12:07:17.938835    3824 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0818 12:07:18.032593    3824 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0818 12:07:18.032616    3824 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0818 12:07:18.046682    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:07:18.149082    3824 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 12:08:19.094745    3824 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.947540366s)
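	A restart that blocks for roughly a minute and then exits 1 means docker.service failed to come up, so the run gathers the unit journal next. Interactively on the node, the same diagnosis uses the commands systemd suggests in the error below:
	
		systemctl status docker.service --no-pager
		journalctl -xeu docker.service --no-pager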
	I0818 12:08:19.094811    3824 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0818 12:08:19.130194    3824 out.go:201] 
	W0818 12:08:19.167950    3824 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 18 19:07:15 ha-373000-m04 systemd[1]: Starting Docker Application Container Engine...
	Aug 18 19:07:15 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:15.789565294Z" level=info msg="Starting up"
	Aug 18 19:07:15 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:15.790497979Z" level=info msg="containerd not running, starting managed containerd"
	Aug 18 19:07:15 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:15.791060023Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=491
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.808949895Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.823962995Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824017555Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824063133Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824074046Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824245628Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824285399Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824412941Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824458745Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824472526Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824481113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824628618Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824862154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.826539571Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.826578591Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.826700099Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.826735930Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.826894261Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.826943257Z" level=info msg="metadata content store policy set" policy=shared
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828221494Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828269425Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828283877Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828294494Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828306440Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828355173Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828863798Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828968570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829012385Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829087106Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829133358Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829171270Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829205360Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829239671Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829274394Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829307961Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829340520Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829370638Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829531056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829845805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829883191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829896300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829908724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829919786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829928151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829938442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829947500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829958637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829966701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829975548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830016884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830031620Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830069034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830080580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830090618Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830119633Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830130594Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830138753Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830147234Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830156530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830165223Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830172746Z" level=info msg="NRI interface is disabled by configuration."
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830327211Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830423458Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830503251Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830581618Z" level=info msg="containerd successfully booted in 0.022620s"
	Aug 18 19:07:16 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:16.817938076Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 18 19:07:16 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:16.831116800Z" level=info msg="Loading containers: start."
	Aug 18 19:07:16 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:16.929784593Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 18 19:07:16 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:16.991389466Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 18 19:07:17 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:17.063078080Z" level=info msg="Loading containers: done."
	Aug 18 19:07:17 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:17.074071701Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 18 19:07:17 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:17.074231517Z" level=info msg="Daemon has completed initialization"
	Aug 18 19:07:17 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:17.097399297Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 18 19:07:17 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:17.097566032Z" level=info msg="API listen on [::]:2376"
	Aug 18 19:07:17 ha-373000-m04 systemd[1]: Started Docker Application Container Engine.
	Aug 18 19:07:18 ha-373000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Aug 18 19:07:18 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:18.209129651Z" level=info msg="Processing signal 'terminated'"
	Aug 18 19:07:18 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:18.210124874Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 18 19:07:18 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:18.210325925Z" level=info msg="Daemon shutdown complete"
	Aug 18 19:07:18 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:18.210407877Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 18 19:07:18 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:18.210420112Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 18 19:07:19 ha-373000-m04 systemd[1]: docker.service: Deactivated successfully.
	Aug 18 19:07:19 ha-373000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Aug 18 19:07:19 ha-373000-m04 systemd[1]: Starting Docker Application Container Engine...
	Aug 18 19:07:19 ha-373000-m04 dockerd[1176]: time="2024-08-18T19:07:19.260443864Z" level=info msg="Starting up"
	Aug 18 19:08:19 ha-373000-m04 dockerd[1176]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 18 19:08:19 ha-373000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 18 19:08:19 ha-373000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 18 19:08:19 ha-373000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0818 12:08:19.168043    3824 out.go:270] * 
	W0818 12:08:19.169228    3824 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:08:19.232626    3824 out.go:201] 
	
	
	==> Docker <==
	Aug 18 19:05:42 ha-373000 dockerd[1167]: time="2024-08-18T19:05:42.397909257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:05:42 ha-373000 dockerd[1167]: time="2024-08-18T19:05:42.400610172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:05:42 ha-373000 cri-dockerd[1420]: time="2024-08-18T19:05:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fc1f2fb60f7c58ea2a794ed7b3890a722b7e02d695c8b7d8be84e17d817f22ff/resolv.conf as [nameserver 192.169.0.1]"
	Aug 18 19:05:42 ha-373000 cri-dockerd[1420]: time="2024-08-18T19:05:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3772c138aa65e84e76733835788a3b5c8c0f94bde29eaad82c89e1b944ad3bff/resolv.conf as [nameserver 192.169.0.1]"
	Aug 18 19:05:42 ha-373000 dockerd[1167]: time="2024-08-18T19:05:42.550381570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 18 19:05:42 ha-373000 dockerd[1167]: time="2024-08-18T19:05:42.550475397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 18 19:05:42 ha-373000 dockerd[1167]: time="2024-08-18T19:05:42.550487498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:05:42 ha-373000 dockerd[1167]: time="2024-08-18T19:05:42.550588600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:05:42 ha-373000 cri-dockerd[1420]: time="2024-08-18T19:05:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eb4ed9664dda977cc9b021fafae44e8ee00272a594ba9ddcb993b4d0d5f0db6f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 18 19:05:42 ha-373000 dockerd[1167]: time="2024-08-18T19:05:42.611318900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 18 19:05:42 ha-373000 dockerd[1167]: time="2024-08-18T19:05:42.611621875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 18 19:05:42 ha-373000 dockerd[1167]: time="2024-08-18T19:05:42.611734513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:05:42 ha-373000 dockerd[1167]: time="2024-08-18T19:05:42.612037359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:05:42 ha-373000 dockerd[1167]: time="2024-08-18T19:05:42.725056501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 18 19:05:42 ha-373000 dockerd[1167]: time="2024-08-18T19:05:42.725946033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 18 19:05:42 ha-373000 dockerd[1167]: time="2024-08-18T19:05:42.726057340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:05:42 ha-373000 dockerd[1167]: time="2024-08-18T19:05:42.726259789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:06:13 ha-373000 dockerd[1161]: time="2024-08-18T19:06:13.034511897Z" level=info msg="ignoring event" container=b857c2fef140c7ff17e7936e8ecc11703f749579876ae1a0c9996a26e5a9242b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 18 19:06:13 ha-373000 dockerd[1167]: time="2024-08-18T19:06:13.034748077Z" level=info msg="shim disconnected" id=b857c2fef140c7ff17e7936e8ecc11703f749579876ae1a0c9996a26e5a9242b namespace=moby
	Aug 18 19:06:13 ha-373000 dockerd[1167]: time="2024-08-18T19:06:13.034780713Z" level=warning msg="cleaning up after shim disconnected" id=b857c2fef140c7ff17e7936e8ecc11703f749579876ae1a0c9996a26e5a9242b namespace=moby
	Aug 18 19:06:13 ha-373000 dockerd[1167]: time="2024-08-18T19:06:13.034787207Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 18 19:06:27 ha-373000 dockerd[1167]: time="2024-08-18T19:06:27.423655859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 18 19:06:27 ha-373000 dockerd[1167]: time="2024-08-18T19:06:27.423798647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 18 19:06:27 ha-373000 dockerd[1167]: time="2024-08-18T19:06:27.423827418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:06:27 ha-373000 dockerd[1167]: time="2024-08-18T19:06:27.423965192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	eb459a6cac5c5       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       2                   3772c138aa65e       storage-provisioner
	fc1b30cd2c8f2       8c811b4aec35f                                                                                         2 minutes ago        Running             busybox                   1                   eb4ed9664dda9       busybox-7dff88458-hdg8r
	f3dbf3c176d9d       cbb01a7bd410d                                                                                         2 minutes ago        Running             coredns                   1                   fc1f2fb60f7c5       coredns-6f6b679f8f-rcfmc
	b857c2fef140c       6e38f40d628db                                                                                         2 minutes ago        Exited              storage-provisioner       1                   3772c138aa65e       storage-provisioner
	09b8ded75e80f       cbb01a7bd410d                                                                                         2 minutes ago        Running             coredns                   1                   bfce6a3dd1783       coredns-6f6b679f8f-hv98f
	530d580001894       ad83b2ca7b09e                                                                                         2 minutes ago        Running             kube-proxy                1                   c8f48c6f44e55       kube-proxy-2xkhp
	fbeef7aab770f       12968670680f4                                                                                         2 minutes ago        Running             kindnet-cni               1                   32a6ca59d02e7       kindnet-k4c4p
	2848cdc0e8c15       045733566833c                                                                                         2 minutes ago        Running             kube-controller-manager   2                   76a884a77895b       kube-controller-manager-ha-373000
	ebe78e53d91d8       38af8ddebf499                                                                                         3 minutes ago        Running             kube-vip                  0                   32cc18cf0bf63       kube-vip-ha-373000
	a9e532272f1be       2e96e5913fc06                                                                                         3 minutes ago        Running             etcd                      1                   4c11500a40693       etcd-ha-373000
	de016fdbd6fe9       1766f54c897f0                                                                                         3 minutes ago        Running             kube-scheduler            1                   a3cc486386c46       kube-scheduler-ha-373000
	8d1b9f96928b6       604f5db92eaa8                                                                                         3 minutes ago        Running             kube-apiserver            1                   c3ec38b5b8b88       kube-apiserver-ha-373000
	91e90de8fe34f       045733566833c                                                                                         3 minutes ago        Exited              kube-controller-manager   1                   76a884a77895b       kube-controller-manager-ha-373000
	e4c8538956c47       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   6 minutes ago        Exited              busybox                   0                   d600143e2a2b0       busybox-7dff88458-hdg8r
	a183c1159f971       cbb01a7bd410d                                                                                         8 minutes ago        Exited              coredns                   0                   ad6105cce447d       coredns-6f6b679f8f-hv98f
	aa4d1e9b3fb56       cbb01a7bd410d                                                                                         8 minutes ago        Exited              coredns                   0                   238410437a3ad       coredns-6f6b679f8f-rcfmc
	0d55a0eeb67f5       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              8 minutes ago        Exited              kindnet-cni               0                   462acdf375c7b       kindnet-k4c4p
	8493354682ea9       ad83b2ca7b09e                                                                                         8 minutes ago        Exited              kube-proxy                0                   4ea29595ff287       kube-proxy-2xkhp
	da35cb184d7df       604f5db92eaa8                                                                                         9 minutes ago        Exited              kube-apiserver            0                   af987f19793c3       kube-apiserver-ha-373000
	311485d219660       2e96e5913fc06                                                                                         9 minutes ago        Exited              etcd                      0                   7a32c93f32a9c       etcd-ha-373000
	807d80bec4e45       1766f54c897f0                                                                                         9 minutes ago        Exited              kube-scheduler            0                   26832128bdd4d       kube-scheduler-ha-373000
	
	
	==> coredns [09b8ded75e80] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54168 - 48100 "HINFO IN 5449853140043981156.1960656544577820065. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012696853s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1317389180]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.060) (total time: 30002ms):
	Trace[1317389180]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (19:06:13.063)
	Trace[1317389180]: [30.002782846s] [30.002782846s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[804407349]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.060) (total time: 30003ms):
	Trace[804407349]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (19:06:13.063)
	Trace[804407349]: [30.003234686s] [30.003234686s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1407395902]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.063) (total time: 30001ms):
	Trace[1407395902]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (19:06:13.064)
	Trace[1407395902]: [30.001205512s] [30.001205512s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [a183c1159f97] <==
	[INFO] 10.244.0.4:47320 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000489917s
	[INFO] 10.244.2.2:54669 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012678s
	[INFO] 10.244.1.2:43705 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009229s
	[INFO] 10.244.1.2:54355 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011165041s
	[INFO] 10.244.1.2:33518 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010428983s
	[INFO] 10.244.0.4:45605 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000084046s
	[INFO] 10.244.0.4:50628 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000145592s
	[INFO] 10.244.0.4:33161 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126048s
	[INFO] 10.244.2.2:37518 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000121734s
	[INFO] 10.244.2.2:58873 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101169s
	[INFO] 10.244.2.2:50099 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070705s
	[INFO] 10.244.1.2:54977 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124537s
	[INFO] 10.244.1.2:43577 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073582s
	[INFO] 10.244.0.4:46803 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000059521s
	[INFO] 10.244.0.4:59171 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000040726s
	[INFO] 10.244.2.2:39966 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105714s
	[INFO] 10.244.2.2:51946 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007014s
	[INFO] 10.244.1.2:51245 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084678s
	[INFO] 10.244.1.2:40537 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000069547s
	[INFO] 10.244.0.4:36306 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000081884s
	[INFO] 10.244.2.2:41973 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000065341s
	[INFO] 10.244.2.2:57971 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000083126s
	[INFO] 10.244.2.2:43658 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000062409s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [aa4d1e9b3fb5] <==
	[INFO] 10.244.1.2:45157 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000149452s
	[INFO] 10.244.0.4:36007 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109468s
	[INFO] 10.244.0.4:38953 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110641s
	[INFO] 10.244.0.4:41701 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000625324s
	[INFO] 10.244.0.4:54986 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090319s
	[INFO] 10.244.0.4:44918 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000046498s
	[INFO] 10.244.2.2:55873 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125265s
	[INFO] 10.244.2.2:36969 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098884s
	[INFO] 10.244.2.2:37588 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000041509s
	[INFO] 10.244.2.2:39779 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000038465s
	[INFO] 10.244.2.2:58973 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000072931s
	[INFO] 10.244.1.2:46606 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012235s
	[INFO] 10.244.1.2:55528 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000071719s
	[INFO] 10.244.0.4:43575 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000034973s
	[INFO] 10.244.0.4:55874 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073156s
	[INFO] 10.244.2.2:45694 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000050926s
	[INFO] 10.244.2.2:37999 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097175s
	[INFO] 10.244.1.2:39004 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108935s
	[INFO] 10.244.1.2:45716 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148013s
	[INFO] 10.244.0.4:40729 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000079121s
	[INFO] 10.244.0.4:38794 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000057287s
	[INFO] 10.244.0.4:48660 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000119698s
	[INFO] 10.244.2.2:38231 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000048609s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f3dbf3c176d9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48257 - 13179 "HINFO IN 3102078210809204073.2916918949998232158. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.013387746s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1929152146]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.060) (total time: 30003ms):
	Trace[1929152146]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (19:06:13.063)
	Trace[1929152146]: [30.003742558s] [30.003742558s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[763765503]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.060) (total time: 30003ms):
	Trace[763765503]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (19:06:13.064)
	Trace[763765503]: [30.003508272s] [30.003508272s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1437534784]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.063) (total time: 30000ms):
	Trace[1437534784]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:06:13.064)
	Trace[1437534784]: [30.000417221s] [30.000417221s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-373000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-373000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=ha-373000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_18T11_59_27_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 18:59:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-373000
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:08:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 19:05:28 +0000   Sun, 18 Aug 2024 18:59:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 19:05:28 +0000   Sun, 18 Aug 2024 18:59:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 19:05:28 +0000   Sun, 18 Aug 2024 18:59:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 19:05:28 +0000   Sun, 18 Aug 2024 19:05:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-373000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 be8c970205d64d6f8d4700f55fd439c4
	  System UUID:                2f6e4f9b-0000-0000-8f55-d5f48a14c3df
	  Boot ID:                    bfd69bae-ba72-43fb-b7a0-1130a86ddec9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hdg8r              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 coredns-6f6b679f8f-hv98f             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m50s
	  kube-system                 coredns-6f6b679f8f-rcfmc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m50s
	  kube-system                 etcd-ha-373000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m55s
	  kube-system                 kindnet-k4c4p                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m51s
	  kube-system                 kube-apiserver-ha-373000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m55s
	  kube-system                 kube-controller-manager-ha-373000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m55s
	  kube-system                 kube-proxy-2xkhp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m51s
	  kube-system                 kube-scheduler-ha-373000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m55s
	  kube-system                 kube-vip-ha-373000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m49s                  kube-proxy       
	  Normal  Starting                 2m38s                  kube-proxy       
	  Normal  Starting                 8m55s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m55s                  kubelet          Node ha-373000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m55s                  kubelet          Node ha-373000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m55s                  kubelet          Node ha-373000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m52s                  node-controller  Node ha-373000 event: Registered Node ha-373000 in Controller
	  Normal  NodeReady                8m33s                  kubelet          Node ha-373000 status is now: NodeReady
	  Normal  RegisteredNode           7m44s                  node-controller  Node ha-373000 event: Registered Node ha-373000 in Controller
	  Normal  RegisteredNode           6m31s                  node-controller  Node ha-373000 event: Registered Node ha-373000 in Controller
	  Normal  RegisteredNode           4m28s                  node-controller  Node ha-373000 event: Registered Node ha-373000 in Controller
	  Normal  NodeHasSufficientMemory  3m31s (x8 over 3m31s)  kubelet          Node ha-373000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 3m31s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    3m31s (x8 over 3m31s)  kubelet          Node ha-373000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m31s (x7 over 3m31s)  kubelet          Node ha-373000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m59s                  node-controller  Node ha-373000 event: Registered Node ha-373000 in Controller
	  Normal  RegisteredNode           2m37s                  node-controller  Node ha-373000 event: Registered Node ha-373000 in Controller
	  Normal  RegisteredNode           95s                    node-controller  Node ha-373000 event: Registered Node ha-373000 in Controller
	
	
	Name:               ha-373000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-373000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=ha-373000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_18T12_00_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 19:00:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-373000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:08:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 19:05:17 +0000   Sun, 18 Aug 2024 19:00:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 19:05:17 +0000   Sun, 18 Aug 2024 19:00:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 19:05:17 +0000   Sun, 18 Aug 2024 19:00:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 19:05:17 +0000   Sun, 18 Aug 2024 19:00:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-373000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 1dd1525a71ed4e34854a0717875d7974
	  System UUID:                7a234b98-0000-0000-a476-83254bfde967
	  Boot ID:                    4f102243-3831-4b64-8d3d-63e4676f5c43
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-85gjs                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 etcd-ha-373000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m49s
	  kube-system                 kindnet-q7ghp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m52s
	  kube-system                 kube-apiserver-ha-373000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 kube-controller-manager-ha-373000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 kube-proxy-5hg88                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m52s
	  kube-system                 kube-scheduler-ha-373000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 kube-vip-ha-373000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 3m2s                   kube-proxy       
	  Normal   Starting                 4m31s                  kube-proxy       
	  Normal   Starting                 7m47s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  7m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  7m52s (x8 over 7m52s)  kubelet          Node ha-373000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m52s (x8 over 7m52s)  kubelet          Node ha-373000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m52s (x7 over 7m52s)  kubelet          Node ha-373000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m47s                  node-controller  Node ha-373000-m02 event: Registered Node ha-373000-m02 in Controller
	  Normal   RegisteredNode           7m44s                  node-controller  Node ha-373000-m02 event: Registered Node ha-373000-m02 in Controller
	  Normal   RegisteredNode           6m31s                  node-controller  Node ha-373000-m02 event: Registered Node ha-373000-m02 in Controller
	  Normal   Starting                 4m35s                  kubelet          Starting kubelet.
	  Warning  Rebooted                 4m35s                  kubelet          Node ha-373000-m02 has been rebooted, boot id: cadd9b91-3eb1-4a50-944d-943942f3c889
	  Normal   NodeHasSufficientPID     4m35s                  kubelet          Node ha-373000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  4m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  4m35s                  kubelet          Node ha-373000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m35s                  kubelet          Node ha-373000-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           4m28s                  node-controller  Node ha-373000-m02 event: Registered Node ha-373000-m02 in Controller
	  Normal   Starting                 3m12s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  3m12s (x8 over 3m12s)  kubelet          Node ha-373000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m12s (x8 over 3m12s)  kubelet          Node ha-373000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m12s (x7 over 3m12s)  kubelet          Node ha-373000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  3m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           2m59s                  node-controller  Node ha-373000-m02 event: Registered Node ha-373000-m02 in Controller
	  Normal   RegisteredNode           2m37s                  node-controller  Node ha-373000-m02 event: Registered Node ha-373000-m02 in Controller
	  Normal   RegisteredNode           95s                    node-controller  Node ha-373000-m02 event: Registered Node ha-373000-m02 in Controller
	
	
	Name:               ha-373000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-373000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=ha-373000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_18T12_01_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 19:01:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-373000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:08:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 19:06:39 +0000   Sun, 18 Aug 2024 19:06:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 19:06:39 +0000   Sun, 18 Aug 2024 19:06:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 19:06:39 +0000   Sun, 18 Aug 2024 19:06:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 19:06:39 +0000   Sun, 18 Aug 2024 19:06:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-373000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 39c76ca250bb4974bbb6ad00c4a76f4b
	  System UUID:                94c34aaf-0000-0000-9127-b4e2c0237480
	  Boot ID:                    cf2a50ab-9184-48cf-a911-6a12029ca86b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hxp7z                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 etcd-ha-373000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m37s
	  kube-system                 kindnet-wxcx9                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m39s
	  kube-system                 kube-apiserver-ha-373000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-controller-manager-ha-373000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-proxy-bprqp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 kube-scheduler-ha-373000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-vip-ha-373000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 98s                    kube-proxy       
	  Normal   Starting                 6m34s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  6m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           6m39s                  node-controller  Node ha-373000-m03 event: Registered Node ha-373000-m03 in Controller
	  Normal   NodeHasSufficientMemory  6m39s (x8 over 6m39s)  kubelet          Node ha-373000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m39s (x8 over 6m39s)  kubelet          Node ha-373000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m39s (x7 over 6m39s)  kubelet          Node ha-373000-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m37s                  node-controller  Node ha-373000-m03 event: Registered Node ha-373000-m03 in Controller
	  Normal   RegisteredNode           6m31s                  node-controller  Node ha-373000-m03 event: Registered Node ha-373000-m03 in Controller
	  Normal   RegisteredNode           4m28s                  node-controller  Node ha-373000-m03 event: Registered Node ha-373000-m03 in Controller
	  Normal   RegisteredNode           2m59s                  node-controller  Node ha-373000-m03 event: Registered Node ha-373000-m03 in Controller
	  Normal   RegisteredNode           2m37s                  node-controller  Node ha-373000-m03 event: Registered Node ha-373000-m03 in Controller
	  Normal   NodeNotReady             2m19s                  node-controller  Node ha-373000-m03 status is now: NodeNotReady
	  Normal   Starting                 102s                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  102s                   kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 102s                   kubelet          Node ha-373000-m03 has been rebooted, boot id: cf2a50ab-9184-48cf-a911-6a12029ca86b
	  Normal   NodeHasSufficientMemory  102s (x2 over 102s)    kubelet          Node ha-373000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    102s (x2 over 102s)    kubelet          Node ha-373000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     102s (x2 over 102s)    kubelet          Node ha-373000-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                102s                   kubelet          Node ha-373000-m03 status is now: NodeReady
	  Normal   RegisteredNode           95s                    node-controller  Node ha-373000-m03 event: Registered Node ha-373000-m03 in Controller
	
	
	Name:               ha-373000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-373000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=ha-373000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_18T12_02_38_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 19:02:38 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-373000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:04:00 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 18 Aug 2024 19:03:09 +0000   Sun, 18 Aug 2024 19:06:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 18 Aug 2024 19:03:09 +0000   Sun, 18 Aug 2024 19:06:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 18 Aug 2024 19:03:09 +0000   Sun, 18 Aug 2024 19:06:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 18 Aug 2024 19:03:09 +0000   Sun, 18 Aug 2024 19:06:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-373000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb7e1fc529fe4c47b8d9b40f3d1984a6
	  System UUID:                4216427c-0000-0000-8c2b-c85701e511a2
	  Boot ID:                    ffa27ed1-34bc-4ad1-a52d-6b7cdfc1b588
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2gf5h       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m43s
	  kube-system                 kube-proxy-l7zlx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m36s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    5m43s (x2 over 5m44s)  kubelet          Node ha-373000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  5m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     5m43s (x2 over 5m44s)  kubelet          Node ha-373000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  5m43s (x2 over 5m44s)  kubelet          Node ha-373000-m04 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           5m42s                  node-controller  Node ha-373000-m04 event: Registered Node ha-373000-m04 in Controller
	  Normal  RegisteredNode           5m41s                  node-controller  Node ha-373000-m04 event: Registered Node ha-373000-m04 in Controller
	  Normal  RegisteredNode           5m39s                  node-controller  Node ha-373000-m04 event: Registered Node ha-373000-m04 in Controller
	  Normal  NodeReady                5m20s                  kubelet          Node ha-373000-m04 status is now: NodeReady
	  Normal  RegisteredNode           4m28s                  node-controller  Node ha-373000-m04 event: Registered Node ha-373000-m04 in Controller
	  Normal  RegisteredNode           2m59s                  node-controller  Node ha-373000-m04 event: Registered Node ha-373000-m04 in Controller
	  Normal  RegisteredNode           2m37s                  node-controller  Node ha-373000-m04 event: Registered Node ha-373000-m04 in Controller
	  Normal  NodeNotReady             2m19s                  node-controller  Node ha-373000-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           95s                    node-controller  Node ha-373000-m04 event: Registered Node ha-373000-m04 in Controller
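
The Unknown conditions above are what the API reports once a kubelet stops posting status (the m04 VM was still down when these logs were collected). A minimal client-go sketch, not part of the test suite, that flags such nodes programmatically; it assumes a kubeconfig at the default ~/.kube/config location:

    // notready.go: list nodes whose Ready condition is not True.
    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, n := range nodes.Items {
            for _, c := range n.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
                    fmt.Printf("%s: Ready=%s reason=%s (%s)\n", n.Name, c.Status, c.Reason, c.Message)
                }
            }
        }
    }

Run against the state captured above, this should report ha-373000-m04 with reason NodeStatusUnknown.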
	
	
	==> dmesg <==
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.035772] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.008033] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.653667] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006467] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.700574] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.244282] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.513719] systemd-fstab-generator[471]: Ignoring "noauto" option for root device
	[  +0.099510] systemd-fstab-generator[483]: Ignoring "noauto" option for root device
	[  +1.960582] systemd-fstab-generator[1091]: Ignoring "noauto" option for root device
	[  +0.269134] systemd-fstab-generator[1126]: Ignoring "noauto" option for root device
	[  +0.056589] kauditd_printk_skb: 99 callbacks suppressed
	[  +0.053681] systemd-fstab-generator[1138]: Ignoring "noauto" option for root device
	[  +0.111453] systemd-fstab-generator[1152]: Ignoring "noauto" option for root device
	[  +2.427542] systemd-fstab-generator[1373]: Ignoring "noauto" option for root device
	[  +0.102430] systemd-fstab-generator[1385]: Ignoring "noauto" option for root device
	[  +0.104882] systemd-fstab-generator[1397]: Ignoring "noauto" option for root device
	[  +0.113938] systemd-fstab-generator[1412]: Ignoring "noauto" option for root device
	[  +0.439463] systemd-fstab-generator[1572]: Ignoring "noauto" option for root device
	[  +6.887746] kauditd_printk_skb: 212 callbacks suppressed
	[Aug18 19:05] kauditd_printk_skb: 40 callbacks suppressed
	[Aug18 19:06] kauditd_printk_skb: 85 callbacks suppressed
	
	
	==> etcd [311485d21966] <==
	2024/08/18 19:04:24 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-18T19:04:24.294478Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"5.811599644s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-18T19:04:24.297217Z","caller":"traceutil/trace.go:171","msg":"trace[1222647709] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; }","duration":"5.81433944s","start":"2024-08-18T19:04:18.482872Z","end":"2024-08-18T19:04:24.297212Z","steps":["trace[1222647709] 'agreement among raft nodes before linearized reading'  (duration: 5.811599988s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T19:04:24.297250Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T19:04:18.482838Z","time spent":"5.814390017s","remote":"127.0.0.1:50520","response type":"/etcdserverpb.KV/Range","request count":0,"request size":86,"response count":0,"response size":0,"request content":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true "}
	2024/08/18 19:04:24 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-18T19:04:24.343451Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-18T19:04:24.343477Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-18T19:04:24.343545Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-18T19:04:24.344459Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:04:24.344477Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:04:24.344491Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:04:24.344572Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:04:24.344598Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:04:24.344719Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:04:24.344731Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:04:24.344736Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"3dc5de516363476c"}
	{"level":"info","ts":"2024-08-18T19:04:24.344743Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3dc5de516363476c"}
	{"level":"info","ts":"2024-08-18T19:04:24.344755Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3dc5de516363476c"}
	{"level":"info","ts":"2024-08-18T19:04:24.345718Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3dc5de516363476c"}
	{"level":"info","ts":"2024-08-18T19:04:24.345771Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3dc5de516363476c"}
	{"level":"info","ts":"2024-08-18T19:04:24.345797Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3dc5de516363476c"}
	{"level":"info","ts":"2024-08-18T19:04:24.345806Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"3dc5de516363476c"}
	{"level":"info","ts":"2024-08-18T19:04:24.349335Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-18T19:04:24.349441Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-18T19:04:24.349451Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-373000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> etcd [a9e532272f1b] <==
	{"level":"warn","ts":"2024-08-18T19:06:20.774262Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"3dc5de516363476c","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:06:20.774347Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"3dc5de516363476c","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:06:22.714827Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3dc5de516363476c","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:06:22.714899Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3dc5de516363476c","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:06:24.776831Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"3dc5de516363476c","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:06:24.777192Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"3dc5de516363476c","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:06:27.715938Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3dc5de516363476c","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:06:27.715964Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3dc5de516363476c","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:06:28.780365Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"3dc5de516363476c","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:06:28.780551Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"3dc5de516363476c","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:06:32.716595Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3dc5de516363476c","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:06:32.716632Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3dc5de516363476c","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:06:32.781983Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"3dc5de516363476c","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:06:32.782034Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"3dc5de516363476c","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:06:36.783717Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"3dc5de516363476c","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:06:36.783804Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"3dc5de516363476c","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:06:37.716886Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3dc5de516363476c","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:06:37.717046Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3dc5de516363476c","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-18T19:06:40.938257Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"3dc5de516363476c"}
	{"level":"info","ts":"2024-08-18T19:06:40.947603Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3dc5de516363476c"}
	{"level":"info","ts":"2024-08-18T19:06:40.949815Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3dc5de516363476c"}
	{"level":"info","ts":"2024-08-18T19:06:40.996821Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"3dc5de516363476c","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-18T19:06:40.996896Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3dc5de516363476c"}
	{"level":"info","ts":"2024-08-18T19:06:41.079829Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"3dc5de516363476c","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-18T19:06:41.079911Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3dc5de516363476c"}
	
	
	==> kernel <==
	 19:08:22 up 3 min,  0 users,  load average: 0.24, 0.28, 0.12
	Linux ha-373000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [0d55a0eeb67f] <==
	I0818 19:03:45.806841       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	I0818 19:03:55.803194       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0818 19:03:55.803360       1 main.go:322] Node ha-373000-m03 has CIDR [10.244.2.0/24] 
	I0818 19:03:55.803667       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0818 19:03:55.803856       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	I0818 19:03:55.804295       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0818 19:03:55.804448       1 main.go:299] handling current node
	I0818 19:03:55.804713       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0818 19:03:55.804905       1 main.go:322] Node ha-373000-m02 has CIDR [10.244.1.0/24] 
	I0818 19:04:05.803401       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0818 19:04:05.803664       1 main.go:299] handling current node
	I0818 19:04:05.803807       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0818 19:04:05.804018       1 main.go:322] Node ha-373000-m02 has CIDR [10.244.1.0/24] 
	I0818 19:04:05.804411       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0818 19:04:05.804569       1 main.go:322] Node ha-373000-m03 has CIDR [10.244.2.0/24] 
	I0818 19:04:05.804783       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0818 19:04:05.804917       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	I0818 19:04:15.811869       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0818 19:04:15.811960       1 main.go:299] handling current node
	I0818 19:04:15.811993       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0818 19:04:15.811999       1 main.go:322] Node ha-373000-m02 has CIDR [10.244.1.0/24] 
	I0818 19:04:15.812247       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0818 19:04:15.812278       1 main.go:322] Node ha-373000-m03 has CIDR [10.244.2.0/24] 
	I0818 19:04:15.812456       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0818 19:04:15.812775       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [fbeef7aab770] <==
	I0818 19:07:43.314765       1 main.go:299] handling current node
	I0818 19:07:53.314603       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0818 19:07:53.314965       1 main.go:299] handling current node
	I0818 19:07:53.315193       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0818 19:07:53.315333       1 main.go:322] Node ha-373000-m02 has CIDR [10.244.1.0/24] 
	I0818 19:07:53.315716       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0818 19:07:53.315835       1 main.go:322] Node ha-373000-m03 has CIDR [10.244.2.0/24] 
	I0818 19:07:53.316068       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0818 19:07:53.316208       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	I0818 19:08:03.321017       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0818 19:08:03.321072       1 main.go:322] Node ha-373000-m02 has CIDR [10.244.1.0/24] 
	I0818 19:08:03.321207       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0818 19:08:03.321352       1 main.go:322] Node ha-373000-m03 has CIDR [10.244.2.0/24] 
	I0818 19:08:03.321568       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0818 19:08:03.321711       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	I0818 19:08:03.321913       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0818 19:08:03.321961       1 main.go:299] handling current node
	I0818 19:08:13.321058       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0818 19:08:13.321081       1 main.go:322] Node ha-373000-m03 has CIDR [10.244.2.0/24] 
	I0818 19:08:13.321201       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0818 19:08:13.321287       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	I0818 19:08:13.321527       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0818 19:08:13.321536       1 main.go:299] handling current node
	I0818 19:08:13.321545       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0818 19:08:13.321548       1 main.go:322] Node ha-373000-m02 has CIDR [10.244.1.0/24] 
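
kindnet's per-node lines simply echo each Node's spec.podCIDRs allocation (the 10.244.N.0/24 ranges here). A hedged helper that prints the same mapping via client-go; the package and function names are illustrative:

    package diag

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // PrintPodCIDRs reports each node's pod CIDR allocation, the same data
    // kindnet logs as "Node <name> has CIDR [...]".
    func PrintPodCIDRs(ctx context.Context, cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            fmt.Printf("Node %s has CIDR %v\n", n.Name, n.Spec.PodCIDRs)
        }
        return nil
    }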
	
	
	==> kube-apiserver [8d1b9f96928b] <==
	I0818 19:05:17.645616       1 controller.go:90] Starting OpenAPI V3 controller
	I0818 19:05:17.645726       1 naming_controller.go:294] Starting NamingConditionController
	I0818 19:05:17.646051       1 establishing_controller.go:81] Starting EstablishingController
	I0818 19:05:17.646230       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0818 19:05:17.646334       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0818 19:05:17.646408       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0818 19:05:17.726468       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0818 19:05:17.726550       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0818 19:05:17.726964       1 shared_informer.go:320] Caches are synced for configmaps
	I0818 19:05:17.727055       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0818 19:05:17.734198       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0818 19:05:17.740843       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0818 19:05:17.741048       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0818 19:05:17.741237       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0818 19:05:17.741571       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0818 19:05:17.741772       1 aggregator.go:171] initial CRD sync complete...
	I0818 19:05:17.741794       1 autoregister_controller.go:144] Starting autoregister controller
	I0818 19:05:17.741804       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0818 19:05:17.741815       1 cache.go:39] Caches are synced for autoregister controller
	I0818 19:05:17.751519       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0818 19:05:17.755144       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0818 19:05:17.765153       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0818 19:05:17.765474       1 policy_source.go:224] refreshing policies
	I0818 19:05:17.804728       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0818 19:05:18.643866       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	
	
	==> kube-apiserver [da35cb184d7d] <==
	W0818 19:04:25.349322       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.349369       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.349419       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.349531       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.349595       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.349641       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.349733       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.349810       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.349834       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.349904       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.349973       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.350044       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.350113       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.350180       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.350212       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.349915       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.349988       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.350066       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.349814       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.350182       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.350401       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.350426       1 logging.go:55] [core] [Channel #13 SubChannel #16]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.350445       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.350464       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.349746       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
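
Every line in this block is the same event: the apiserver's per-resource gRPC channels failing to reconnect to etcd on 127.0.0.1:2379 after the etcd container stopped (see the "closed etcd server" entry in the [311485d21966] log above). The condition reduces to a TCP dial; a trivial sketch:

    // dialprobe.go: check whether anything is listening on the etcd client port.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "127.0.0.1:2379", 2*time.Second)
        if err != nil {
            fmt.Println("etcd unreachable:", err) // e.g. "connect: connection refused"
            return
        }
        conn.Close()
        fmt.Println("etcd client port open")
    }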
	
	
	==> kube-controller-manager [2848cdc0e8c1] <==
	I0818 19:05:44.811404       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0818 19:06:02.412098       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-373000-m04"
	I0818 19:06:02.417486       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-373000-m03"
	I0818 19:06:02.426870       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-373000-m04"
	I0818 19:06:02.433964       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-373000-m03"
	I0818 19:06:02.519252       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="39.391293ms"
	I0818 19:06:02.519509       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="208.425µs"
	I0818 19:06:04.235466       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-373000-m03"
	I0818 19:06:07.567090       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-373000-m03"
	I0818 19:06:14.339375       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-373000-m04"
	I0818 19:06:17.566337       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-373000-m04"
	I0818 19:06:21.709020       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="11.673956ms"
	I0818 19:06:21.713323       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="106.07µs"
	I0818 19:06:21.735277       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="12.863247ms"
	I0818 19:06:21.735967       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="81.466µs"
	I0818 19:06:21.746428       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-ctkgn EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-ctkgn\": the object has been modified; please apply your changes to the latest version and try again"
	I0818 19:06:21.747565       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"313f6603-3b63-4bf8-b340-97d07580eb36", APIVersion:"v1", ResourceVersion:"244", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-ctkgn EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-ctkgn": the object has been modified; please apply your changes to the latest version and try again
	I0818 19:06:39.703806       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-373000-m03"
	I0818 19:06:39.715043       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-373000-m03"
	I0818 19:06:40.697923       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.311µs"
	I0818 19:06:42.536516       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-373000-m03"
	I0818 19:06:43.415265       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="20.489084ms"
	I0818 19:06:43.415383       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="45.297µs"
	I0818 19:06:46.960496       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-373000-m04"
	I0818 19:06:47.058027       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-373000-m04"
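
The FailedToUpdateEndpointSlices warning above is an optimistic-concurrency conflict: the EndpointSlice's resourceVersion changed between the controller's read and its write, so the apiserver rejected the update. The conventional client-go remedy is a get/mutate/update loop; a sketch using retry.RetryOnConflict, with an illustrative mutation rather than anything the controller actually writes:

    package diag

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // TouchService re-reads and updates a Service until the write stops being
    // rejected with a resourceVersion conflict.
    func TouchService(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            svc, err := cs.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if svc.Labels == nil {
                svc.Labels = map[string]string{}
            }
            svc.Labels["example.invalid/touched"] = "true" // hypothetical mutation
            _, err = cs.CoreV1().Services(ns).Update(ctx, svc, metav1.UpdateOptions{})
            return err
        })
    }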
	
	
	==> kube-controller-manager [91e90de8fe34] <==
	I0818 19:04:58.710503       1 serving.go:386] Generated self-signed cert in-memory
	I0818 19:04:58.973279       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0818 19:04:58.973364       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:04:58.975212       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0818 19:04:58.975486       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0818 19:04:58.975569       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0818 19:04:58.976177       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0818 19:05:18.981143       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
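
This controller-manager replica timed out because one apiserver poststarthook, rbac/bootstrap-roles, was still failing when the health check ran. Under default RBAC the /healthz endpoint is readable anonymously (via the system:public-info-viewer role), so the failing check can be inspected with a bare HTTPS GET; a sketch reusing the apiserver address from this log:

    // healthz.go: fetch the verbose health report from the apiserver.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The serving cert is signed by minikube's own CA; skipping
            // verification is acceptable only for an ad-hoc probe like this.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.169.0.5:8443/healthz?verbose")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%s\n%s", resp.Status, body)
    }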
	
	
	==> kube-proxy [530d58000189] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 19:05:43.260298       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 19:05:43.283054       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0818 19:05:43.283201       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 19:05:43.332462       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 19:05:43.332509       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 19:05:43.332527       1 server_linux.go:169] "Using iptables Proxier"
	I0818 19:05:43.335382       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 19:05:43.336178       1 server.go:483] "Version info" version="v1.31.0"
	I0818 19:05:43.336209       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:05:43.339664       1 config.go:197] "Starting service config controller"
	I0818 19:05:43.340475       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 19:05:43.340854       1 config.go:104] "Starting endpoint slice config controller"
	I0818 19:05:43.340884       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 19:05:43.342595       1 config.go:326] "Starting node config controller"
	I0818 19:05:43.342621       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 19:05:43.440978       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0818 19:05:43.441099       1 shared_informer.go:320] Caches are synced for service config
	I0818 19:05:43.442676       1 shared_informer.go:320] Caches are synced for node config
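
The truncated err=< ... > block at the top of this section is kube-proxy's startup cleanup trying to delete leftover nftables state; this Buildroot kernel lacks nf_tables support, so nft returns "Operation not supported" and kube-proxy falls back to the iptables proxier, which is why the rest of the log is healthy. A rough capability probe, assuming the nft binary is on PATH inside the guest:

    // nftprobe.go: ask the kernel for its nftables ruleset.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("nft", "list", "tables").CombinedOutput()
        if err != nil {
            fmt.Printf("nftables unavailable: %v\n%s", err, out)
            return
        }
        fmt.Printf("nftables supported:\n%s", out)
    }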
	
	
	==> kube-proxy [8493354682ea] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 18:59:31.957366       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 18:59:31.964975       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0818 18:59:31.965035       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 18:59:31.993827       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 18:59:31.993880       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 18:59:31.993899       1 server_linux.go:169] "Using iptables Proxier"
	I0818 18:59:31.995999       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 18:59:31.996318       1 server.go:483] "Version info" version="v1.31.0"
	I0818 18:59:31.996347       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 18:59:31.997862       1 config.go:197] "Starting service config controller"
	I0818 18:59:31.997906       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 18:59:31.997955       1 config.go:104] "Starting endpoint slice config controller"
	I0818 18:59:31.997983       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 18:59:31.998642       1 config.go:326] "Starting node config controller"
	I0818 18:59:31.998670       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 18:59:32.098501       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0818 18:59:32.098515       1 shared_informer.go:320] Caches are synced for service config
	I0818 18:59:32.098852       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [807d80bec4e4] <==
	W0818 18:59:24.091285       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0818 18:59:24.091378       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 18:59:24.208309       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0818 18:59:24.208477       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0818 18:59:26.678728       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0818 19:02:38.714766       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-l8txd\": pod kube-proxy-l8txd is already assigned to node \"ha-373000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-l8txd" node="ha-373000-m04"
	E0818 19:02:38.714954       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-l8txd\": pod kube-proxy-l8txd is already assigned to node \"ha-373000-m04\"" pod="kube-system/kube-proxy-l8txd"
	I0818 19:02:38.715132       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-l8txd" node="ha-373000-m04"
	E0818 19:02:38.714987       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-2gf5h\": pod kindnet-2gf5h is already assigned to node \"ha-373000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-2gf5h" node="ha-373000-m04"
	E0818 19:02:38.715342       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ff15d17a-fb96-4721-847f-13f5c0e2613a(kube-system/kindnet-2gf5h) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-2gf5h"
	E0818 19:02:38.715353       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-2gf5h\": pod kindnet-2gf5h is already assigned to node \"ha-373000-m04\"" pod="kube-system/kindnet-2gf5h"
	I0818 19:02:38.715361       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-2gf5h" node="ha-373000-m04"
	E0818 19:02:38.735628       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-l7zlx\": pod kube-proxy-l7zlx is already assigned to node \"ha-373000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-l7zlx" node="ha-373000-m04"
	E0818 19:02:38.735683       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 853afdf8-598a-435c-8c48-233287580493(kube-system/kube-proxy-l7zlx) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-l7zlx"
	E0818 19:02:38.735697       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-l7zlx\": pod kube-proxy-l7zlx is already assigned to node \"ha-373000-m04\"" pod="kube-system/kube-proxy-l7zlx"
	I0818 19:02:38.735708       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-l7zlx" node="ha-373000-m04"
	E0818 19:02:38.736591       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-kg6jv\": pod kindnet-kg6jv is already assigned to node \"ha-373000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-kg6jv" node="ha-373000-m04"
	E0818 19:02:38.736671       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 50964441-c762-4c22-8fd9-c3695b7291c5(kube-system/kindnet-kg6jv) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-kg6jv"
	E0818 19:02:38.736686       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-kg6jv\": pod kindnet-kg6jv is already assigned to node \"ha-373000-m04\"" pod="kube-system/kindnet-kg6jv"
	I0818 19:02:38.736699       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-kg6jv" node="ha-373000-m04"
	E0818 19:02:38.759152       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-6g6fs\": pod kindnet-6g6fs is already assigned to node \"ha-373000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-6g6fs" node="ha-373000-m04"
	E0818 19:02:38.759208       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ccae47d9-4f47-4a8e-9ff1-9c3acf42d3cb(kube-system/kindnet-6g6fs) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-6g6fs"
	E0818 19:02:38.759220       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-6g6fs\": pod kindnet-6g6fs is already assigned to node \"ha-373000-m04\"" pod="kube-system/kindnet-6g6fs"
	I0818 19:02:38.759449       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-6g6fs" node="ha-373000-m04"
	E0818 19:04:24.272540       1 run.go:72] "command failed" err="finished without leader elect"
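
The "already assigned to node" errors are 409 Conflicts from the pods/binding subresource: each pod had already been bound by an earlier attempt during the HA node join, and the scheduler treats the rejection as benign ("Pod has been assigned to node. Abort adding it back to queue."). A sketch of the underlying call; the helper name is illustrative:

    package diag

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // BindPod issues the same pods/binding request the scheduler's DefaultBinder
    // makes; repeating it for an already-bound pod yields the 409 seen above.
    func BindPod(ctx context.Context, cs kubernetes.Interface, ns, pod, node string) error {
        return cs.CoreV1().Pods(ns).Bind(ctx, &corev1.Binding{
            ObjectMeta: metav1.ObjectMeta{Namespace: ns, Name: pod},
            Target:     corev1.ObjectReference{Kind: "Node", Name: node},
        }, metav1.CreateOptions{})
    }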
	
	
	==> kube-scheduler [de016fdbd6fe] <==
	I0818 19:04:58.645297       1 serving.go:386] Generated self-signed cert in-memory
	W0818 19:05:08.939365       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0818 19:05:08.939390       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0818 19:05:08.939395       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0818 19:05:17.672661       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0818 19:05:17.674961       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:05:17.680297       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0818 19:05:17.680709       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0818 19:05:17.683175       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0818 19:05:17.689784       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0818 19:05:17.786103       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 18 19:05:42 ha-373000 kubelet[1579]: I0818 19:05:42.379949    1579 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="224201eecaa62b4ed09e6764c91ef4dc" path="/var/lib/kubelet/pods/224201eecaa62b4ed09e6764c91ef4dc/volumes"
	Aug 18 19:05:42 ha-373000 kubelet[1579]: I0818 19:05:42.615140    1579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb4ed9664dda977cc9b021fafae44e8ee00272a594ba9ddcb993b4d0d5f0db6f"
	Aug 18 19:05:42 ha-373000 kubelet[1579]: I0818 19:05:42.759496    1579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc1f2fb60f7c58ea2a794ed7b3890a722b7e02d695c8b7d8be84e17d817f22ff"
	Aug 18 19:05:42 ha-373000 kubelet[1579]: I0818 19:05:42.771290    1579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3772c138aa65e84e76733835788a3b5c8c0f94bde29eaad82c89e1b944ad3bff"
	Aug 18 19:05:42 ha-373000 kubelet[1579]: I0818 19:05:42.799548    1579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfce6a3dd1783a1665494aa3c9f1676c1fd42788d0dfa87d2196b81b8622522e"
	Aug 18 19:05:50 ha-373000 kubelet[1579]: E0818 19:05:50.390879    1579 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 18 19:05:50 ha-373000 kubelet[1579]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 18 19:05:50 ha-373000 kubelet[1579]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 18 19:05:50 ha-373000 kubelet[1579]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 18 19:05:50 ha-373000 kubelet[1579]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 18 19:05:50 ha-373000 kubelet[1579]: I0818 19:05:50.412291    1579 scope.go:117] "RemoveContainer" containerID="f806e8fda7ac0424ec5809ee1d3490000910e1bcde902d636000fbe7c1a0ad14"
	Aug 18 19:06:13 ha-373000 kubelet[1579]: I0818 19:06:13.151959    1579 scope.go:117] "RemoveContainer" containerID="6ea2d724255aeefc72019808f3a7cf3353706c1aaf09c7f80d3aa13d2a2db8b7"
	Aug 18 19:06:13 ha-373000 kubelet[1579]: I0818 19:06:13.152170    1579 scope.go:117] "RemoveContainer" containerID="b857c2fef140c7ff17e7936e8ecc11703f749579876ae1a0c9996a26e5a9242b"
	Aug 18 19:06:13 ha-373000 kubelet[1579]: E0818 19:06:13.152254    1579 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(aa9c6f5d-6c1e-4901-83bb-62bc420ea044)\"" pod="kube-system/storage-provisioner" podUID="aa9c6f5d-6c1e-4901-83bb-62bc420ea044"
	Aug 18 19:06:27 ha-373000 kubelet[1579]: I0818 19:06:27.368674    1579 scope.go:117] "RemoveContainer" containerID="b857c2fef140c7ff17e7936e8ecc11703f749579876ae1a0c9996a26e5a9242b"
	Aug 18 19:06:50 ha-373000 kubelet[1579]: E0818 19:06:50.388174    1579 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 18 19:06:50 ha-373000 kubelet[1579]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 18 19:06:50 ha-373000 kubelet[1579]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 18 19:06:50 ha-373000 kubelet[1579]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 18 19:06:50 ha-373000 kubelet[1579]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 18 19:07:50 ha-373000 kubelet[1579]: E0818 19:07:50.390057    1579 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 18 19:07:50 ha-373000 kubelet[1579]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 18 19:07:50 ha-373000 kubelet[1579]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 18 19:07:50 ha-373000 kubelet[1579]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 18 19:07:50 ha-373000 kubelet[1579]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
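The kubelet log above repeats one error: the iptables canary cannot create its chain because the guest kernel has no ip6tables `nat` table. A minimal way to confirm that by hand, reusing the `minikube ssh` form already used elsewhere in this run (a sketch; the `ip6table_nat` module name is the usual suspect behind the "do you need to insmod?" hint, not something taken from these logs):

	# reproduce the failure the canary hits inside the guest
	out/minikube-darwin-amd64 -p ha-373000 ssh "sudo ip6tables -t nat -L -n"
	# check whether the ip6tables nat module is loaded at all
	out/minikube-darwin-amd64 -p ha-373000 ssh "lsmod | grep ip6table_nat || echo ip6table_nat not loaded"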
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-373000 -n ha-373000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-373000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (258.92s)
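For reference, the two post-mortem health checks the harness ran above can be replayed by hand against the same profile; both commands are copied verbatim from the helpers_test.go invocations:

	# print only the APIServer field of the profile status
	out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-373000 -n ha-373000
	# list every pod, in any namespace, that is not in phase Running
	kubectl --context ha-373000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running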

TestMultiControlPlane/serial/DeleteSecondaryNode (11.46s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-373000 node delete m03 -v=7 --alsologtostderr: (6.89761829s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-373000 status -v=7 --alsologtostderr: exit status 2 (335.522636ms)

-- stdout --
	ha-373000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-373000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-373000-m04
	type: Worker
	host: Running
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0818 12:08:30.588298    3928 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:08:30.588598    3928 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:08:30.588603    3928 out.go:358] Setting ErrFile to fd 2...
	I0818 12:08:30.588607    3928 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:08:30.588774    3928 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1007/.minikube/bin
	I0818 12:08:30.588973    3928 out.go:352] Setting JSON to false
	I0818 12:08:30.588996    3928 mustload.go:65] Loading cluster: ha-373000
	I0818 12:08:30.589034    3928 notify.go:220] Checking for updates...
	I0818 12:08:30.589342    3928 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:08:30.589357    3928 status.go:255] checking status of ha-373000 ...
	I0818 12:08:30.589727    3928 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:08:30.589769    3928 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:08:30.598606    3928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51936
	I0818 12:08:30.598937    3928 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:08:30.599349    3928 main.go:141] libmachine: Using API Version  1
	I0818 12:08:30.599357    3928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:08:30.599585    3928 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:08:30.599699    3928 main.go:141] libmachine: (ha-373000) Calling .GetState
	I0818 12:08:30.599787    3928 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:08:30.599864    3928 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid from json: 3836
	I0818 12:08:30.600840    3928 status.go:330] ha-373000 host status = "Running" (err=<nil>)
	I0818 12:08:30.600859    3928 host.go:66] Checking if "ha-373000" exists ...
	I0818 12:08:30.601083    3928 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:08:30.601101    3928 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:08:30.609533    3928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51938
	I0818 12:08:30.609881    3928 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:08:30.610263    3928 main.go:141] libmachine: Using API Version  1
	I0818 12:08:30.610283    3928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:08:30.610501    3928 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:08:30.610616    3928 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:08:30.610689    3928 host.go:66] Checking if "ha-373000" exists ...
	I0818 12:08:30.610937    3928 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:08:30.610966    3928 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:08:30.622377    3928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51940
	I0818 12:08:30.622736    3928 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:08:30.623055    3928 main.go:141] libmachine: Using API Version  1
	I0818 12:08:30.623071    3928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:08:30.623268    3928 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:08:30.623383    3928 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:08:30.623544    3928 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 12:08:30.623568    3928 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:08:30.623652    3928 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:08:30.623743    3928 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:08:30.623842    3928 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:08:30.623933    3928 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:08:30.656176    3928 ssh_runner.go:195] Run: systemctl --version
	I0818 12:08:30.660852    3928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 12:08:30.672526    3928 kubeconfig.go:125] found "ha-373000" server: "https://192.169.0.254:8443"
	I0818 12:08:30.672551    3928 api_server.go:166] Checking apiserver status ...
	I0818 12:08:30.672596    3928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 12:08:30.683843    3928 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1978/cgroup
	W0818 12:08:30.691683    3928 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1978/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0818 12:08:30.691730    3928 ssh_runner.go:195] Run: ls
	I0818 12:08:30.694852    3928 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0818 12:08:30.697859    3928 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0818 12:08:30.697870    3928 status.go:422] ha-373000 apiserver status = Running (err=<nil>)
	I0818 12:08:30.697879    3928 status.go:257] ha-373000 status: &{Name:ha-373000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 12:08:30.697890    3928 status.go:255] checking status of ha-373000-m02 ...
	I0818 12:08:30.698153    3928 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:08:30.698196    3928 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:08:30.707088    3928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51944
	I0818 12:08:30.707428    3928 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:08:30.707769    3928 main.go:141] libmachine: Using API Version  1
	I0818 12:08:30.707784    3928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:08:30.708003    3928 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:08:30.708109    3928 main.go:141] libmachine: (ha-373000-m02) Calling .GetState
	I0818 12:08:30.708179    3928 main.go:141] libmachine: (ha-373000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:08:30.708267    3928 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid from json: 3847
	I0818 12:08:30.709232    3928 status.go:330] ha-373000-m02 host status = "Running" (err=<nil>)
	I0818 12:08:30.709243    3928 host.go:66] Checking if "ha-373000-m02" exists ...
	I0818 12:08:30.709505    3928 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:08:30.709527    3928 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:08:30.718393    3928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51946
	I0818 12:08:30.718756    3928 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:08:30.719080    3928 main.go:141] libmachine: Using API Version  1
	I0818 12:08:30.719098    3928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:08:30.719294    3928 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:08:30.719410    3928 main.go:141] libmachine: (ha-373000-m02) Calling .GetIP
	I0818 12:08:30.719492    3928 host.go:66] Checking if "ha-373000-m02" exists ...
	I0818 12:08:30.719764    3928 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:08:30.719795    3928 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:08:30.728673    3928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51948
	I0818 12:08:30.729037    3928 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:08:30.729400    3928 main.go:141] libmachine: Using API Version  1
	I0818 12:08:30.729415    3928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:08:30.729650    3928 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:08:30.729761    3928 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:08:30.729905    3928 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 12:08:30.729917    3928 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:08:30.730008    3928 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:08:30.730090    3928 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:08:30.730185    3928 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:08:30.730264    3928 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:08:30.759457    3928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 12:08:30.770612    3928 kubeconfig.go:125] found "ha-373000" server: "https://192.169.0.254:8443"
	I0818 12:08:30.770626    3928 api_server.go:166] Checking apiserver status ...
	I0818 12:08:30.770665    3928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 12:08:30.781090    3928 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2133/cgroup
	W0818 12:08:30.788426    3928 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2133/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0818 12:08:30.788466    3928 ssh_runner.go:195] Run: ls
	I0818 12:08:30.791798    3928 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0818 12:08:30.794877    3928 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0818 12:08:30.794889    3928 status.go:422] ha-373000-m02 apiserver status = Running (err=<nil>)
	I0818 12:08:30.794897    3928 status.go:257] ha-373000-m02 status: &{Name:ha-373000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 12:08:30.794907    3928 status.go:255] checking status of ha-373000-m04 ...
	I0818 12:08:30.795171    3928 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:08:30.795191    3928 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:08:30.803695    3928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51952
	I0818 12:08:30.804017    3928 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:08:30.804372    3928 main.go:141] libmachine: Using API Version  1
	I0818 12:08:30.804388    3928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:08:30.804563    3928 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:08:30.804690    3928 main.go:141] libmachine: (ha-373000-m04) Calling .GetState
	I0818 12:08:30.804762    3928 main.go:141] libmachine: (ha-373000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:08:30.804849    3928 main.go:141] libmachine: (ha-373000-m04) DBG | hyperkit pid from json: 3877
	I0818 12:08:30.805786    3928 status.go:330] ha-373000-m04 host status = "Running" (err=<nil>)
	I0818 12:08:30.805795    3928 host.go:66] Checking if "ha-373000-m04" exists ...
	I0818 12:08:30.806055    3928 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:08:30.806085    3928 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:08:30.814533    3928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51954
	I0818 12:08:30.814879    3928 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:08:30.815184    3928 main.go:141] libmachine: Using API Version  1
	I0818 12:08:30.815203    3928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:08:30.815423    3928 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:08:30.815534    3928 main.go:141] libmachine: (ha-373000-m04) Calling .GetIP
	I0818 12:08:30.815623    3928 host.go:66] Checking if "ha-373000-m04" exists ...
	I0818 12:08:30.815890    3928 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:08:30.815930    3928 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:08:30.824345    3928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51956
	I0818 12:08:30.824675    3928 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:08:30.824979    3928 main.go:141] libmachine: Using API Version  1
	I0818 12:08:30.824990    3928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:08:30.825207    3928 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:08:30.825313    3928 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	I0818 12:08:30.825436    3928 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 12:08:30.825446    3928 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:08:30.825526    3928 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:08:30.825596    3928 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:08:30.825676    3928 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:08:30.825760    3928 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/id_rsa Username:docker}
	I0818 12:08:30.855825    3928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 12:08:30.866463    3928 status.go:257] ha-373000-m04 status: &{Name:ha-373000-m04 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-373000 status -v=7 --alsologtostderr" : exit status 2
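The exit status matches the status output: every component on the two control-plane nodes is Running, while ha-373000-m04 reports kubelet: Stopped. A quick way to inspect the stopped kubelet directly, reusing the `ssh -n <node>` pattern from the audit table below (a sketch; the systemctl/journalctl calls are assumptions about the guest, though the harness itself probes kubelet with `systemctl is-active` over the same SSH path):

	# ask systemd on the worker node why kubelet is down
	out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000-m04 "sudo systemctl status kubelet --no-pager"
	# pull the last kubelet log lines from the node's journal
	out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000-m04 "sudo journalctl -u kubelet --no-pager -n 50"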
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-373000 -n ha-373000
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-373000 logs -n 25: (3.486404294s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n ha-373000-m02 sudo cat                                                                                      | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /home/docker/cp-test_ha-373000-m03_ha-373000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-373000 cp ha-373000-m03:/home/docker/cp-test.txt                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04:/home/docker/cp-test_ha-373000-m03_ha-373000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n ha-373000-m04 sudo cat                                                                                      | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /home/docker/cp-test_ha-373000-m03_ha-373000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-373000 cp testdata/cp-test.txt                                                                                            | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-373000 cp ha-373000-m04:/home/docker/cp-test.txt                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1633039751/001/cp-test_ha-373000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-373000 cp ha-373000-m04:/home/docker/cp-test.txt                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000:/home/docker/cp-test_ha-373000-m04_ha-373000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n ha-373000 sudo cat                                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /home/docker/cp-test_ha-373000-m04_ha-373000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-373000 cp ha-373000-m04:/home/docker/cp-test.txt                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m02:/home/docker/cp-test_ha-373000-m04_ha-373000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n ha-373000-m02 sudo cat                                                                                      | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /home/docker/cp-test_ha-373000-m04_ha-373000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-373000 cp ha-373000-m04:/home/docker/cp-test.txt                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m03:/home/docker/cp-test_ha-373000-m04_ha-373000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n ha-373000-m03 sudo cat                                                                                      | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /home/docker/cp-test_ha-373000-m04_ha-373000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-373000 node stop m02 -v=7                                                                                                 | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-373000 node start m02 -v=7                                                                                                | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:04 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-373000 -v=7                                                                                                       | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:04 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-373000 -v=7                                                                                                            | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:04 PDT | 18 Aug 24 12:04 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-373000 --wait=true -v=7                                                                                                | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:04 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-373000                                                                                                            | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:08 PDT |                     |
	| node    | ha-373000 node delete m03 -v=7                                                                                               | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:08 PDT | 18 Aug 24 12:08 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 12:04:31
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 12:04:31.983272    3824 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:04:31.983454    3824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:04:31.983459    3824 out.go:358] Setting ErrFile to fd 2...
	I0818 12:04:31.983463    3824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:04:31.983623    3824 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1007/.minikube/bin
	I0818 12:04:31.985167    3824 out.go:352] Setting JSON to false
	I0818 12:04:32.009018    3824 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2042,"bootTime":1724005829,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0818 12:04:32.009111    3824 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:04:32.030819    3824 out.go:177] * [ha-373000] minikube v1.33.1 on Darwin 14.6.1
	I0818 12:04:32.074529    3824 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:04:32.074586    3824 notify.go:220] Checking for updates...
	I0818 12:04:32.118375    3824 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:04:32.139430    3824 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0818 12:04:32.160729    3824 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:04:32.182618    3824 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	I0818 12:04:32.204484    3824 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:04:32.226364    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:04:32.226552    3824 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:04:32.227242    3824 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:04:32.227322    3824 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:04:32.236867    3824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51772
	I0818 12:04:32.237225    3824 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:04:32.237659    3824 main.go:141] libmachine: Using API Version  1
	I0818 12:04:32.237676    3824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:04:32.237931    3824 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:04:32.238060    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:04:32.267813    3824 out.go:177] * Using the hyperkit driver based on existing profile
	I0818 12:04:32.289474    3824 start.go:297] selected driver: hyperkit
	I0818 12:04:32.289504    3824 start.go:901] validating driver "hyperkit" against &{Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:04:32.289713    3824 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:04:32.289908    3824 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:04:32.290109    3824 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19423-1007/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0818 12:04:32.300191    3824 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0818 12:04:32.305600    3824 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:04:32.305625    3824 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0818 12:04:32.309104    3824 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:04:32.309145    3824 cni.go:84] Creating CNI manager for ""
	I0818 12:04:32.309152    3824 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0818 12:04:32.309217    3824 start.go:340] cluster config:
	{Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:04:32.309317    3824 iso.go:125] acquiring lock: {Name:mkfcc70059a4397f76252ba67d02448d3569c468 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:04:32.358744    3824 out.go:177] * Starting "ha-373000" primary control-plane node in "ha-373000" cluster
	I0818 12:04:32.379125    3824 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:04:32.379197    3824 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0818 12:04:32.379221    3824 cache.go:56] Caching tarball of preloaded images
	I0818 12:04:32.379454    3824 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0818 12:04:32.379473    3824 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:04:32.379655    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:04:32.380668    3824 start.go:360] acquireMachinesLock for ha-373000: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:04:32.380793    3824 start.go:364] duration metric: took 98.513µs to acquireMachinesLock for "ha-373000"
	I0818 12:04:32.380830    3824 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:04:32.380850    3824 fix.go:54] fixHost starting: 
	I0818 12:04:32.381275    3824 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:04:32.381305    3824 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:04:32.390300    3824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51774
	I0818 12:04:32.390644    3824 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:04:32.390984    3824 main.go:141] libmachine: Using API Version  1
	I0818 12:04:32.390995    3824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:04:32.391207    3824 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:04:32.391330    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:04:32.391423    3824 main.go:141] libmachine: (ha-373000) Calling .GetState
	I0818 12:04:32.391500    3824 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:04:32.391596    3824 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid from json: 2975
	I0818 12:04:32.392493    3824 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid 2975 missing from process table
	I0818 12:04:32.392518    3824 fix.go:112] recreateIfNeeded on ha-373000: state=Stopped err=<nil>
	I0818 12:04:32.392535    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	W0818 12:04:32.392619    3824 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:04:32.435089    3824 out.go:177] * Restarting existing hyperkit VM for "ha-373000" ...
	I0818 12:04:32.455966    3824 main.go:141] libmachine: (ha-373000) Calling .Start
	I0818 12:04:32.456397    3824 main.go:141] libmachine: (ha-373000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid
	I0818 12:04:32.456421    3824 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:04:32.458400    3824 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid 2975 missing from process table
	I0818 12:04:32.458413    3824 main.go:141] libmachine: (ha-373000) DBG | pid 2975 is in state "Stopped"
	I0818 12:04:32.458431    3824 main.go:141] libmachine: (ha-373000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid...
	I0818 12:04:32.458650    3824 main.go:141] libmachine: (ha-373000) DBG | Using UUID 2f6e5f86-d003-4f9b-8f55-d5f48a14c3df
	I0818 12:04:32.582503    3824 main.go:141] libmachine: (ha-373000) DBG | Generated MAC be:21:66:25:9a:b1
	I0818 12:04:32.582527    3824 main.go:141] libmachine: (ha-373000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000
	I0818 12:04:32.582675    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2f6e5f86-d003-4f9b-8f55-d5f48a14c3df", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037d230)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:04:32.582701    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2f6e5f86-d003-4f9b-8f55-d5f48a14c3df", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037d230)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:04:32.582750    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2f6e5f86-d003-4f9b-8f55-d5f48a14c3df", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/ha-373000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"}
	I0818 12:04:32.582797    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2f6e5f86-d003-4f9b-8f55-d5f48a14c3df -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/ha-373000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"
	I0818 12:04:32.582809    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 12:04:32.584342    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 DEBUG: hyperkit: Pid is 3836
	I0818 12:04:32.584802    3824 main.go:141] libmachine: (ha-373000) DBG | Attempt 0
	I0818 12:04:32.584828    3824 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:04:32.584904    3824 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid from json: 3836
	I0818 12:04:32.586608    3824 main.go:141] libmachine: (ha-373000) DBG | Searching for be:21:66:25:9a:b1 in /var/db/dhcpd_leases ...
	I0818 12:04:32.586694    3824 main.go:141] libmachine: (ha-373000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0818 12:04:32.586716    3824 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c245a6}
	I0818 12:04:32.586736    3824 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39707}
	I0818 12:04:32.586754    3824 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c39672}
	I0818 12:04:32.586763    3824 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c395f4}
	I0818 12:04:32.586768    3824 main.go:141] libmachine: (ha-373000) DBG | Found match: be:21:66:25:9a:b1
	I0818 12:04:32.586791    3824 main.go:141] libmachine: (ha-373000) DBG | IP: 192.169.0.5
	I0818 12:04:32.586800    3824 main.go:141] libmachine: (ha-373000) Calling .GetConfigRaw
	I0818 12:04:32.587439    3824 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:04:32.587606    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:04:32.588031    3824 machine.go:93] provisionDockerMachine start ...
	I0818 12:04:32.588043    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:04:32.588201    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:32.588339    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:32.588463    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:32.588602    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:32.588712    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:32.588878    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:04:32.589128    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:04:32.589140    3824 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 12:04:32.592359    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 12:04:32.649659    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 12:04:32.650386    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:04:32.650405    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:04:32.650422    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:04:32.650441    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:04:33.028577    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 12:04:33.028592    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 12:04:33.143700    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:04:33.143730    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:04:33.143746    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:04:33.143773    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:04:33.144665    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 12:04:33.144677    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 12:04:38.692844    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:38 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0818 12:04:38.692980    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:38 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0818 12:04:38.692989    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:38 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0818 12:04:38.717966    3824 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:04:38 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0818 12:04:43.657661    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 12:04:43.657675    3824 main.go:141] libmachine: (ha-373000) Calling .GetMachineName
	I0818 12:04:43.657817    3824 buildroot.go:166] provisioning hostname "ha-373000"
	I0818 12:04:43.657829    3824 main.go:141] libmachine: (ha-373000) Calling .GetMachineName
	I0818 12:04:43.657947    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:43.658033    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:43.658131    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:43.658218    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:43.658320    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:43.658446    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:04:43.658583    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:04:43.658592    3824 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-373000 && echo "ha-373000" | sudo tee /etc/hostname
	I0818 12:04:43.726337    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-373000
	
	I0818 12:04:43.726356    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:43.726492    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:43.726602    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:43.726701    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:43.726793    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:43.726914    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:04:43.727062    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:04:43.727073    3824 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-373000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-373000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-373000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 12:04:43.791204    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 12:04:43.791222    3824 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1007/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1007/.minikube}
	I0818 12:04:43.791240    3824 buildroot.go:174] setting up certificates
	I0818 12:04:43.791251    3824 provision.go:84] configureAuth start
	I0818 12:04:43.791258    3824 main.go:141] libmachine: (ha-373000) Calling .GetMachineName
	I0818 12:04:43.791389    3824 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:04:43.791486    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:43.791580    3824 provision.go:143] copyHostCerts
	I0818 12:04:43.791612    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:04:43.791682    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem, removing ...
	I0818 12:04:43.791691    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:04:43.791831    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem (1082 bytes)
	I0818 12:04:43.792037    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:04:43.792077    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem, removing ...
	I0818 12:04:43.792082    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:04:43.792161    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem (1123 bytes)
	I0818 12:04:43.792314    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:04:43.792360    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem, removing ...
	I0818 12:04:43.792365    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:04:43.792438    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem (1675 bytes)
	I0818 12:04:43.792585    3824 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem org=jenkins.ha-373000 san=[127.0.0.1 192.169.0.5 ha-373000 localhost minikube]
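provision.go:117 generates a Docker server certificate whose SAN list mixes IP addresses and hostnames. A self-contained Go sketch of issuing a cert with that SAN split (self-signed here for brevity; minikube actually signs with ca.pem/ca-key.pem):

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-373000"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs: IPs and DNS names go in separate fields of the template.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
    		DNSNames:    []string{"ha-373000", "localhost", "minikube"},
    	}
    	// Self-signed (template used as its own parent) for brevity.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }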
	I0818 12:04:43.849995    3824 provision.go:177] copyRemoteCerts
	I0818 12:04:43.850046    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 12:04:43.850064    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:43.850180    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:43.850277    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:43.850383    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:43.850475    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:04:43.887087    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 12:04:43.887163    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 12:04:43.906588    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 12:04:43.906643    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0818 12:04:43.926387    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 12:04:43.926447    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 12:04:43.945959    3824 provision.go:87] duration metric: took 154.69571ms to configureAuth
	I0818 12:04:43.945972    3824 buildroot.go:189] setting minikube options for container-runtime
	I0818 12:04:43.946140    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:04:43.946153    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:04:43.946287    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:43.946379    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:43.946466    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:43.946557    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:43.946656    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:43.946772    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:04:43.946901    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:04:43.946910    3824 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0818 12:04:44.005207    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0818 12:04:44.005222    3824 buildroot.go:70] root file system type: tmpfs
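Before rewriting the Docker unit, the provisioner probes the guest's root filesystem type with `df --output=fstype / | tail -n 1`; the buildroot guest reports tmpfs. An equivalent probe in Go, assuming GNU coreutils df is available (a sketch, not the minikube source):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // rootFSType mimics `df --output=fstype / | tail -n 1`.
    // Assumes GNU coreutils df (the --output flag is not in BSD/macOS df).
    func rootFSType() (string, error) {
    	out, err := exec.Command("df", "--output=fstype", "/").Output()
    	if err != nil {
    		return "", err
    	}
    	tokens := strings.Fields(strings.TrimSpace(string(out)))
    	return tokens[len(tokens)-1], nil // last token is the value after the "Type" header
    }

    func main() {
    	t, err := rootFSType()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(t)
    }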
	I0818 12:04:44.005300    3824 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0818 12:04:44.005312    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:44.005446    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:44.005534    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:44.005629    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:44.005730    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:44.005877    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:04:44.006020    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:04:44.006065    3824 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0818 12:04:44.073819    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0818 12:04:44.073841    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:44.073984    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:44.074098    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:44.074187    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:44.074268    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:44.074392    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:04:44.074539    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:04:44.074553    3824 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0818 12:04:45.741799    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0818 12:04:45.741813    3824 machine.go:96] duration metric: took 13.154182627s to provisionDockerMachine
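The unit install above is deliberately idempotent: diff the rendered unit against the installed one, and only mv/daemon-reload/enable/restart when they differ (here the old file did not exist yet, hence the diff error followed by a fresh symlink). A minimal Go sketch of the same compare-then-swap pattern, with hypothetical names:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // installIfChanged writes newContent to path only when it differs from
    // the current file, and reports whether a daemon-reload/restart is needed.
    func installIfChanged(path string, newContent []byte) (changed bool, err error) {
    	old, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(old, newContent) {
    		return false, nil // unchanged: skip reload and restart
    	}
    	if err != nil && !os.IsNotExist(err) {
    		return false, err
    	}
    	// Stage a sibling ".new" file, then rename into place
    	// (atomic when both paths are on the same filesystem).
    	tmp := path + ".new"
    	if err := os.WriteFile(tmp, newContent, 0o644); err != nil {
    		return false, err
    	}
    	return true, os.Rename(tmp, path)
    }

    func main() {
    	changed, err := installIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
    	fmt.Println(changed, err)
    }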
	I0818 12:04:45.741824    3824 start.go:293] postStartSetup for "ha-373000" (driver="hyperkit")
	I0818 12:04:45.741833    3824 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 12:04:45.741844    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:04:45.742025    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 12:04:45.742046    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:45.742143    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:45.742239    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:45.742328    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:45.742403    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:04:45.779742    3824 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 12:04:45.785976    3824 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 12:04:45.785994    3824 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/addons for local assets ...
	I0818 12:04:45.786100    3824 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/files for local assets ...
	I0818 12:04:45.786286    3824 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> 15262.pem in /etc/ssl/certs
	I0818 12:04:45.786293    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /etc/ssl/certs/15262.pem
	I0818 12:04:45.786507    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 12:04:45.795153    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:04:45.825008    3824 start.go:296] duration metric: took 83.165524ms for postStartSetup
	I0818 12:04:45.825032    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:04:45.825216    3824 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0818 12:04:45.825229    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:45.825330    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:45.825446    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:45.825536    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:45.825609    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:04:45.861497    3824 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0818 12:04:45.861553    3824 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0818 12:04:45.913975    3824 fix.go:56] duration metric: took 13.533549329s for fixHost
	I0818 12:04:45.914000    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:45.914142    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:45.914243    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:45.914335    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:45.914429    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:45.914562    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:04:45.914716    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:04:45.914724    3824 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 12:04:45.972708    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724007885.983977698
	
	I0818 12:04:45.972721    3824 fix.go:216] guest clock: 1724007885.983977698
	I0818 12:04:45.972726    3824 fix.go:229] Guest: 2024-08-18 12:04:45.983977698 -0700 PDT Remote: 2024-08-18 12:04:45.913989 -0700 PDT m=+13.967759099 (delta=69.988698ms)
	I0818 12:04:45.972744    3824 fix.go:200] guest clock delta is within tolerance: 69.988698ms
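fixHost reads the guest clock over SSH with `date +%s.%N` and compares it against the host clock, proceeding only when the skew is within tolerance. Parsing that output and computing the delta in Go (the 2s threshold below is an assumed illustration, not minikube's constant):

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    // parseEpoch turns `date +%s.%N` output into a time.Time.
    // float64 parsing is approximate at nanosecond scale, which is
    // fine for a coarse skew check like this one.
    func parseEpoch(s string) (time.Time, error) {
    	sec, err := strconv.ParseFloat(s, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	return time.Unix(0, int64(sec*float64(time.Second))), nil
    }

    func main() {
    	guest, err := parseEpoch("1724007885.983977698") // value from the log above
    	if err != nil {
    		panic(err)
    	}
    	host := time.Unix(0, 1724007885913989000) // host-side reading
    	delta := guest.Sub(host)
    	const tolerance = 2 * time.Second // assumed threshold for illustration
    	fmt.Printf("delta=%v within=%v\n", delta, delta < tolerance && delta > -tolerance)
    }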
	I0818 12:04:45.972748    3824 start.go:83] releasing machines lock for "ha-373000", held for 13.592366774s
	I0818 12:04:45.972769    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:04:45.972898    3824 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:04:45.973002    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:04:45.973353    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:04:45.973448    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:04:45.973532    3824 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 12:04:45.973568    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:45.973602    3824 ssh_runner.go:195] Run: cat /version.json
	I0818 12:04:45.973622    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:04:45.973654    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:45.973709    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:04:45.973731    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:45.973791    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:04:45.973819    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:45.973885    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:04:45.973899    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:04:45.973975    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:04:46.010017    3824 ssh_runner.go:195] Run: systemctl --version
	I0818 12:04:46.068668    3824 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 12:04:46.073848    3824 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 12:04:46.073896    3824 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 12:04:46.088665    3824 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 12:04:46.088678    3824 start.go:495] detecting cgroup driver to use...
	I0818 12:04:46.088793    3824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:04:46.104594    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0818 12:04:46.113505    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0818 12:04:46.122459    3824 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0818 12:04:46.122502    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0818 12:04:46.131401    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:04:46.140195    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0818 12:04:46.148984    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:04:46.157732    3824 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 12:04:46.166637    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0818 12:04:46.175587    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0818 12:04:46.184399    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0818 12:04:46.193294    3824 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 12:04:46.201351    3824 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 12:04:46.209432    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:04:46.307330    3824 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0818 12:04:46.326804    3824 start.go:495] detecting cgroup driver to use...
	I0818 12:04:46.326886    3824 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0818 12:04:46.339615    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:04:46.350592    3824 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 12:04:46.370916    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:04:46.381030    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:04:46.391260    3824 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0818 12:04:46.416547    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:04:46.426851    3824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:04:46.442033    3824 ssh_runner.go:195] Run: which cri-dockerd
	I0818 12:04:46.444975    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0818 12:04:46.453011    3824 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0818 12:04:46.466482    3824 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0818 12:04:46.579328    3824 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0818 12:04:46.679794    3824 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0818 12:04:46.679875    3824 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0818 12:04:46.693907    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:04:46.791012    3824 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 12:04:49.093057    3824 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.302096527s)
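The 130-byte /etc/docker/daemon.json pushed just before this restart pins Docker to the cgroupfs cgroup driver so it agrees with the kubelet's cgroupDriver setting seen later in the kubelet config. A Go sketch that renders such a file; the keys are standard daemon.json options, but the exact contents minikube writes are assumed here:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	cfg := map[string]any{
    		// Must agree with the kubelet's cgroupDriver, or pods fail to start.
    		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
    		// Remaining keys are plausible extras, not confirmed from the log.
    		"log-driver": "json-file",
    		"log-opts":   map[string]string{"max-size": "100m"},
    	}
    	b, err := json.MarshalIndent(cfg, "", "  ")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(b))
    }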
	I0818 12:04:49.093136    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0818 12:04:49.103320    3824 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0818 12:04:49.115838    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:04:49.126241    3824 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0818 12:04:49.218487    3824 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0818 12:04:49.318047    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:04:49.424425    3824 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0818 12:04:49.438128    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:04:49.449061    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:04:49.547962    3824 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0818 12:04:49.611460    3824 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0818 12:04:49.611544    3824 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0818 12:04:49.616359    3824 start.go:563] Will wait 60s for crictl version
	I0818 12:04:49.616414    3824 ssh_runner.go:195] Run: which crictl
	I0818 12:04:49.620236    3824 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 12:04:49.646389    3824 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0818 12:04:49.646459    3824 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:04:49.664790    3824 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:04:49.705551    3824 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0818 12:04:49.705601    3824 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:04:49.706071    3824 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0818 12:04:49.710649    3824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 12:04:49.720358    3824 kubeadm.go:883] updating cluster {Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 12:04:49.720454    3824 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:04:49.720509    3824 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0818 12:04:49.733920    3824 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0818 12:04:49.733938    3824 docker.go:615] Images already preloaded, skipping extraction
	I0818 12:04:49.734009    3824 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0818 12:04:49.747065    3824 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0818 12:04:49.747084    3824 cache_images.go:84] Images are preloaded, skipping loading
	I0818 12:04:49.747099    3824 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0818 12:04:49.747179    3824 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-373000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 12:04:49.747253    3824 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0818 12:04:49.785583    3824 cni.go:84] Creating CNI manager for ""
	I0818 12:04:49.785600    3824 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0818 12:04:49.785611    3824 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 12:04:49.785627    3824 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-373000 NodeName:ha-373000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 12:04:49.785710    3824 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-373000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
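The kubeadm config above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later copied to /var/tmp/minikube/kubeadm.yaml.new. One way to sanity-check such a file before feeding it to kubeadm is to decode each document and list its kind, as in this sketch (uses gopkg.in/yaml.v3; not part of minikube):

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the config
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break // all documents consumed
    		} else if err != nil {
    			panic(err) // malformed YAML document
    		}
    		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
    	}
    }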
	
	I0818 12:04:49.785725    3824 kube-vip.go:115] generating kube-vip config ...
	I0818 12:04:49.785779    3824 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0818 12:04:49.798283    3824 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0818 12:04:49.798356    3824 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0818 12:04:49.798405    3824 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 12:04:49.807035    3824 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 12:04:49.807081    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0818 12:04:49.814327    3824 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0818 12:04:49.827868    3824 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 12:04:49.841383    3824 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0818 12:04:49.855255    3824 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0818 12:04:49.868811    3824 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0818 12:04:49.871686    3824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 12:04:49.880822    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:04:49.979755    3824 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 12:04:49.993936    3824 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000 for IP: 192.169.0.5
	I0818 12:04:49.993948    3824 certs.go:194] generating shared ca certs ...
	I0818 12:04:49.993960    3824 certs.go:226] acquiring lock for ca certs: {Name:mkf9c4ce6e92bb713042213e4605fc93500c1ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:04:49.994155    3824 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key
	I0818 12:04:49.994224    3824 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key
	I0818 12:04:49.994234    3824 certs.go:256] generating profile certs ...
	I0818 12:04:49.994338    3824 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.key
	I0818 12:04:49.994359    3824 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.01b2710d
	I0818 12:04:49.994377    3824 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.01b2710d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0818 12:04:50.091613    3824 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.01b2710d ...
	I0818 12:04:50.091630    3824 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.01b2710d: {Name:mkea55c8a03a32b3ce24aa90dfb71f1f97bc2354 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:04:50.092214    3824 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.01b2710d ...
	I0818 12:04:50.092225    3824 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.01b2710d: {Name:mkcfe2a6c64cb35ce66e627cea270e19236eac55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:04:50.092457    3824 certs.go:381] copying /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.01b2710d -> /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt
	I0818 12:04:50.092702    3824 certs.go:385] copying /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.01b2710d -> /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key
	I0818 12:04:50.092980    3824 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key
	I0818 12:04:50.092991    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0818 12:04:50.093016    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0818 12:04:50.093037    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0818 12:04:50.093056    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0818 12:04:50.093084    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0818 12:04:50.093110    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0818 12:04:50.093130    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0818 12:04:50.093151    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0818 12:04:50.093255    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem (1338 bytes)
	W0818 12:04:50.093309    3824 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526_empty.pem, impossibly tiny 0 bytes
	I0818 12:04:50.093320    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem (1675 bytes)
	I0818 12:04:50.093368    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem (1082 bytes)
	I0818 12:04:50.093405    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem (1123 bytes)
	I0818 12:04:50.093439    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem (1675 bytes)
	I0818 12:04:50.093508    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:04:50.093540    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /usr/share/ca-certificates/15262.pem
	I0818 12:04:50.093561    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:04:50.093579    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem -> /usr/share/ca-certificates/1526.pem
	I0818 12:04:50.094042    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 12:04:50.115280    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0818 12:04:50.139151    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 12:04:50.164514    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0818 12:04:50.185623    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0818 12:04:50.205278    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 12:04:50.227215    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 12:04:50.252699    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 12:04:50.287877    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /usr/share/ca-certificates/15262.pem (1708 bytes)
	I0818 12:04:50.314703    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 12:04:50.362716    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem --> /usr/share/ca-certificates/1526.pem (1338 bytes)
	I0818 12:04:50.396868    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 12:04:50.413037    3824 ssh_runner.go:195] Run: openssl version
	I0818 12:04:50.417460    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15262.pem && ln -fs /usr/share/ca-certificates/15262.pem /etc/ssl/certs/15262.pem"
	I0818 12:04:50.427101    3824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15262.pem
	I0818 12:04:50.430627    3824 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:54 /usr/share/ca-certificates/15262.pem
	I0818 12:04:50.430663    3824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15262.pem
	I0818 12:04:50.436239    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15262.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 12:04:50.445438    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 12:04:50.454433    3824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:04:50.458262    3824 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:38 /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:04:50.458306    3824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:04:50.462517    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 12:04:50.471554    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1526.pem && ln -fs /usr/share/ca-certificates/1526.pem /etc/ssl/certs/1526.pem"
	I0818 12:04:50.480511    3824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1526.pem
	I0818 12:04:50.483892    3824 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:54 /usr/share/ca-certificates/1526.pem
	I0818 12:04:50.483930    3824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1526.pem
	I0818 12:04:50.488142    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1526.pem /etc/ssl/certs/51391683.0"
	I0818 12:04:50.497129    3824 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 12:04:50.500599    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 12:04:50.505066    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 12:04:50.509424    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 12:04:50.513887    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 12:04:50.518263    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 12:04:50.522558    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
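Each `openssl x509 -checkend 86400` above exits non-zero when the certificate expires within the next 24 hours, which is how minikube decides whether existing certs can be reused. The same check with Go's standard library:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM cert at path expires inside d,
    // mirroring `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return cert.NotAfter.Before(time.Now().Add(d)), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }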
	I0818 12:04:50.526858    3824 kubeadm.go:392] StartCluster: {Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:04:50.526981    3824 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0818 12:04:50.544620    3824 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 12:04:50.553037    3824 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 12:04:50.553052    3824 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 12:04:50.553092    3824 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 12:04:50.561771    3824 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 12:04:50.562091    3824 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-373000" does not appear in /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:04:50.562172    3824 kubeconfig.go:62] /Users/jenkins/minikube-integration/19423-1007/kubeconfig needs updating (will repair): [kubeconfig missing "ha-373000" cluster setting kubeconfig missing "ha-373000" context setting]
	I0818 12:04:50.562375    3824 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/kubeconfig: {Name:mk884388b2510ace5b23e8fdd3343cda9524a1f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:04:50.562752    3824 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:04:50.562947    3824 kapi.go:59] client config for ha-373000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xcb43f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0818 12:04:50.563273    3824 cert_rotation.go:140] Starting client certificate rotation controller
	I0818 12:04:50.563454    3824 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 12:04:50.571351    3824 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0818 12:04:50.571368    3824 kubeadm.go:597] duration metric: took 18.311426ms to restartPrimaryControlPlane
	I0818 12:04:50.571374    3824 kubeadm.go:394] duration metric: took 44.525606ms to StartCluster
	I0818 12:04:50.571381    3824 settings.go:142] acquiring lock: {Name:mk54874f9a6ab5b428c8697c9ef71cbcdde2d89e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:04:50.571461    3824 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:04:50.571852    3824 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/kubeconfig: {Name:mk884388b2510ace5b23e8fdd3343cda9524a1f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:04:50.572070    3824 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:04:50.572083    3824 start.go:241] waiting for startup goroutines ...
	I0818 12:04:50.572098    3824 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 12:04:50.572212    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:04:50.614034    3824 out.go:177] * Enabled addons: 
	I0818 12:04:50.635950    3824 addons.go:510] duration metric: took 63.86135ms for enable addons: enabled=[]
	I0818 12:04:50.635988    3824 start.go:246] waiting for cluster config update ...
	I0818 12:04:50.636000    3824 start.go:255] writing updated cluster config ...
	I0818 12:04:50.657675    3824 out.go:201] 
	I0818 12:04:50.679473    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:04:50.679623    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:04:50.701920    3824 out.go:177] * Starting "ha-373000-m02" control-plane node in "ha-373000" cluster
	I0818 12:04:50.743977    3824 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:04:50.744059    3824 cache.go:56] Caching tarball of preloaded images
	I0818 12:04:50.744255    3824 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0818 12:04:50.744273    3824 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:04:50.744402    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:04:50.745331    3824 start.go:360] acquireMachinesLock for ha-373000-m02: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:04:50.745437    3824 start.go:364] duration metric: took 80.166µs to acquireMachinesLock for "ha-373000-m02"
	I0818 12:04:50.745464    3824 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:04:50.745472    3824 fix.go:54] fixHost starting: m02
	I0818 12:04:50.745909    3824 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:04:50.745945    3824 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:04:50.754990    3824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51796
	I0818 12:04:50.755371    3824 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:04:50.755727    3824 main.go:141] libmachine: Using API Version  1
	I0818 12:04:50.755746    3824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:04:50.755953    3824 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:04:50.756082    3824 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:04:50.756178    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetState
	I0818 12:04:50.756271    3824 main.go:141] libmachine: (ha-373000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:04:50.756346    3824 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid from json: 3777
	I0818 12:04:50.757254    3824 fix.go:112] recreateIfNeeded on ha-373000-m02: state=Stopped err=<nil>
	I0818 12:04:50.757265    3824 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:04:50.757267    3824 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid 3777 missing from process table
	W0818 12:04:50.757351    3824 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:04:50.798825    3824 out.go:177] * Restarting existing hyperkit VM for "ha-373000-m02" ...
	I0818 12:04:50.819905    3824 main.go:141] libmachine: (ha-373000-m02) Calling .Start
	I0818 12:04:50.820210    3824 main.go:141] libmachine: (ha-373000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:04:50.820266    3824 main.go:141] libmachine: (ha-373000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid
	I0818 12:04:50.822018    3824 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid 3777 missing from process table
	I0818 12:04:50.822032    3824 main.go:141] libmachine: (ha-373000-m02) DBG | pid 3777 is in state "Stopped"
	I0818 12:04:50.822050    3824 main.go:141] libmachine: (ha-373000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid...
	I0818 12:04:50.822421    3824 main.go:141] libmachine: (ha-373000-m02) DBG | Using UUID 7a237572-4e62-4b98-a476-83254bfde967
	I0818 12:04:50.852069    3824 main.go:141] libmachine: (ha-373000-m02) DBG | Generated MAC ca:b5:c4:e6:47:79
	I0818 12:04:50.852091    3824 main.go:141] libmachine: (ha-373000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000
	I0818 12:04:50.852254    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7a237572-4e62-4b98-a476-83254bfde967", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b05a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:04:50.852282    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7a237572-4e62-4b98-a476-83254bfde967", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b05a0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:04:50.852317    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "7a237572-4e62-4b98-a476-83254bfde967", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/ha-373000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"}
	I0818 12:04:50.852367    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 7a237572-4e62-4b98-a476-83254bfde967 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/ha-373000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"
	I0818 12:04:50.852388    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 12:04:50.854019    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 DEBUG: hyperkit: Pid is 3847
	I0818 12:04:50.854499    3824 main.go:141] libmachine: (ha-373000-m02) DBG | Attempt 0
	I0818 12:04:50.854512    3824 main.go:141] libmachine: (ha-373000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:04:50.854595    3824 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid from json: 3847
	I0818 12:04:50.856201    3824 main.go:141] libmachine: (ha-373000-m02) DBG | Searching for ca:b5:c4:e6:47:79 in /var/db/dhcpd_leases ...
	I0818 12:04:50.856261    3824 main.go:141] libmachine: (ha-373000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0818 12:04:50.856275    3824 main.go:141] libmachine: (ha-373000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39749}
	I0818 12:04:50.856297    3824 main.go:141] libmachine: (ha-373000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c245a6}
	I0818 12:04:50.856304    3824 main.go:141] libmachine: (ha-373000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c39707}
	I0818 12:04:50.856311    3824 main.go:141] libmachine: (ha-373000-m02) DBG | Found match: ca:b5:c4:e6:47:79
	I0818 12:04:50.856314    3824 main.go:141] libmachine: (ha-373000-m02) DBG | IP: 192.169.0.6
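The search above scans macOS's vmnet lease database for the MAC generated earlier. Each lease is a small block of name=, ip_address= and hw_address= lines; a rough shell equivalent (field order assumed to match the entries logged above):

    mac="ca:b5:c4:e6:47:79"
    awk -v mac="$mac" '
        /ip_address=/ { split($0, a, "="); ip = a[2] }
        /hw_address=/ { if (index($0, mac)) print ip }
    ' /var/db/dhcpd_leases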
	I0818 12:04:50.856368    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetConfigRaw
	I0818 12:04:50.857036    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetIP
	I0818 12:04:50.857215    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:04:50.857753    3824 machine.go:93] provisionDockerMachine start ...
	I0818 12:04:50.857763    3824 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:04:50.857876    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:04:50.857972    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:04:50.858077    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:04:50.858182    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:04:50.858287    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:04:50.858439    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:04:50.858605    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:04:50.858614    3824 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 12:04:50.862106    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 12:04:50.873418    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 12:04:50.874484    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:04:50.874508    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:04:50.874528    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:04:50.874540    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:04:51.253540    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:51 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 12:04:51.253561    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:51 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 12:04:51.368118    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:04:51.368138    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:04:51.368149    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:04:51.368159    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:51 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:04:51.369027    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:51 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 12:04:51.369038    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:51 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 12:04:56.941257    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:56 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0818 12:04:56.941321    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:56 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0818 12:04:56.941358    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:56 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0818 12:04:56.965032    3824 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:04:56 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0818 12:05:01.918754    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 12:05:01.918770    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetMachineName
	I0818 12:05:01.918896    3824 buildroot.go:166] provisioning hostname "ha-373000-m02"
	I0818 12:05:01.918915    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetMachineName
	I0818 12:05:01.918996    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:01.919079    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:01.919189    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:01.919273    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:01.919370    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:01.919490    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:05:01.919633    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:05:01.919642    3824 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-373000-m02 && echo "ha-373000-m02" | sudo tee /etc/hostname
	I0818 12:05:01.981031    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-373000-m02
	
	I0818 12:05:01.981046    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:01.981170    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:01.981268    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:01.981355    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:01.981446    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:01.981583    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:05:01.981738    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:05:01.981752    3824 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-373000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-373000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-373000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 12:05:02.039473    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: 
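The script above applies the Debian-style 127.0.1.1 convention: the machine's own hostname is mapped to a loopback address so local tools resolve it without DNS, and an existing 127.0.1.1 line is rewritten rather than duplicated. A quick in-guest sanity check (illustrative, not part of the run):

    hostname                         # expect: ha-373000-m02
    grep ha-373000-m02 /etc/hosts    # expect a 127.0.1.1 (or equivalent) mapping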
	I0818 12:05:02.039493    3824 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1007/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1007/.minikube}
	I0818 12:05:02.039504    3824 buildroot.go:174] setting up certificates
	I0818 12:05:02.039510    3824 provision.go:84] configureAuth start
	I0818 12:05:02.039517    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetMachineName
	I0818 12:05:02.039649    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetIP
	I0818 12:05:02.039751    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:02.039832    3824 provision.go:143] copyHostCerts
	I0818 12:05:02.039860    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:05:02.039907    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem, removing ...
	I0818 12:05:02.039913    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:05:02.040392    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem (1675 bytes)
	I0818 12:05:02.041069    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:05:02.041173    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem, removing ...
	I0818 12:05:02.041189    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:05:02.041355    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem (1082 bytes)
	I0818 12:05:02.041829    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:05:02.041870    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem, removing ...
	I0818 12:05:02.041876    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:05:02.041968    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem (1123 bytes)
	I0818 12:05:02.042135    3824 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem org=jenkins.ha-373000-m02 san=[127.0.0.1 192.169.0.6 ha-373000-m02 localhost minikube]
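The server certificate above is signed by the local CA with a SAN list covering every address the node may be reached at (127.0.0.1, 192.169.0.6, the hostname, localhost, minikube). To see what actually landed in the cert, one can decode it:

    openssl x509 -noout -text \
        -in /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem \
        | grep -A1 'Subject Alternative Name'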
	I0818 12:05:02.193741    3824 provision.go:177] copyRemoteCerts
	I0818 12:05:02.193788    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 12:05:02.193804    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:02.193945    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:02.194042    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:02.194125    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:02.194199    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:05:02.226432    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 12:05:02.226499    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0818 12:05:02.246061    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 12:05:02.246122    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 12:05:02.265998    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 12:05:02.266073    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 12:05:02.285864    3824 provision.go:87] duration metric: took 246.348312ms to configureAuth
	I0818 12:05:02.285879    3824 buildroot.go:189] setting minikube options for container-runtime
	I0818 12:05:02.286050    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:05:02.286079    3824 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:05:02.286213    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:02.286301    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:02.286392    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:02.286472    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:02.286545    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:02.286668    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:05:02.286804    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:05:02.286812    3824 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0818 12:05:02.339893    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0818 12:05:02.339911    3824 buildroot.go:70] root file system type: tmpfs
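The probe above tells the provisioner it is running from the live ISO: a tmpfs root is RAM-backed, so nothing written to it survives a reboot and configuration (like the docker unit below) has to be reapplied on every start. The probe itself is plain coreutils:

    df --output=fstype / | tail -n 1   # prints the filesystem type of /, here tmpfs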
	I0818 12:05:02.340004    3824 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0818 12:05:02.340042    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:02.340176    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:02.340315    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:02.340406    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:02.340501    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:02.340623    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:05:02.340773    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:05:02.340820    3824 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0818 12:05:02.404178    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0818 12:05:02.404194    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:02.404309    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:02.404408    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:02.404497    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:02.404595    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:02.404726    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:05:02.404863    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:05:02.404877    3824 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0818 12:05:04.075470    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0818 12:05:04.075484    3824 machine.go:96] duration metric: took 13.218134296s to provisionDockerMachine
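The unit install that just completed is idempotent by construction: the candidate unit is written to docker.service.new, and only when diff -u reports a difference (or fails outright, as here, because no old unit exists yet) is the file moved into place and docker reloaded, enabled, and restarted. The general shape, restated from the command above:

    new=/lib/systemd/system/docker.service.new
    cur=/lib/systemd/system/docker.service
    sudo diff -u "$cur" "$new" || {
        sudo mv "$new" "$cur"
        sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker
    }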
	I0818 12:05:04.075493    3824 start.go:293] postStartSetup for "ha-373000-m02" (driver="hyperkit")
	I0818 12:05:04.075501    3824 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 12:05:04.075511    3824 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:05:04.075694    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 12:05:04.075707    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:04.075834    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:04.075939    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:04.076037    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:04.076115    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:05:04.108768    3824 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 12:05:04.113829    3824 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 12:05:04.113843    3824 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/addons for local assets ...
	I0818 12:05:04.113949    3824 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/files for local assets ...
	I0818 12:05:04.114103    3824 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> 15262.pem in /etc/ssl/certs
	I0818 12:05:04.114110    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /etc/ssl/certs/15262.pem
	I0818 12:05:04.114276    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 12:05:04.124928    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:05:04.155494    3824 start.go:296] duration metric: took 79.994023ms for postStartSetup
	I0818 12:05:04.155517    3824 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:05:04.155701    3824 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0818 12:05:04.155714    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:04.155817    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:04.155914    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:04.156017    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:04.156111    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:05:04.189027    3824 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0818 12:05:04.189092    3824 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
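Because the root filesystem is tmpfs, minikube keeps a copy of /etc under /var/lib/minikube/backup (presumably on the persistent data disk) and lays it back over / after each boot. The restore is one rsync call: --archive preserves ownership, permissions, and symlinks, while --update skips any destination file that is already newer:

    sudo rsync --archive --update /var/lib/minikube/backup/etc /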
	I0818 12:05:04.242339    3824 fix.go:56] duration metric: took 13.497284645s for fixHost
	I0818 12:05:04.242364    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:04.242535    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:04.242652    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:04.242756    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:04.242854    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:04.242979    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:05:04.243122    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:05:04.243130    3824 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 12:05:04.296405    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724007904.452858156
	
	I0818 12:05:04.296418    3824 fix.go:216] guest clock: 1724007904.452858156
	I0818 12:05:04.296424    3824 fix.go:229] Guest: 2024-08-18 12:05:04.452858156 -0700 PDT Remote: 2024-08-18 12:05:04.242354 -0700 PDT m=+32.296694535 (delta=210.504156ms)
	I0818 12:05:04.296434    3824 fix.go:200] guest clock delta is within tolerance: 210.504156ms
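The clock check above compares the guest's date +%s.%N output against the host clock at the moment the SSH command returned; the ~210ms delta is inside minikube's tolerance, so no resync is forced. A standalone sketch of the same measurement (SSH target and key taken from this run; requires GNU date, since macOS /bin/date does not support %N):

    key=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa
    guest=$(ssh -i "$key" docker@192.169.0.6 date +%s.%N)
    host=$(date +%s.%N)
    echo "guest - host = $(echo "$guest - $host" | bc) s"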
	I0818 12:05:04.296438    3824 start.go:83] releasing machines lock for "ha-373000-m02", held for 13.551411847s
	I0818 12:05:04.296457    3824 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:05:04.296586    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetIP
	I0818 12:05:04.320113    3824 out.go:177] * Found network options:
	I0818 12:05:04.341094    3824 out.go:177]   - NO_PROXY=192.169.0.5
	W0818 12:05:04.362987    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 12:05:04.363034    3824 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:05:04.363842    3824 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:05:04.364116    3824 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:05:04.364240    3824 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 12:05:04.364290    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	W0818 12:05:04.364348    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 12:05:04.364447    3824 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0818 12:05:04.364491    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:04.364510    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:05:04.364707    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:04.364754    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:05:04.364945    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:04.364990    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:05:04.365178    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:05:04.365196    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:05:04.365310    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	W0818 12:05:04.393978    3824 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 12:05:04.394044    3824 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 12:05:04.444626    3824 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 12:05:04.444648    3824 start.go:495] detecting cgroup driver to use...
	I0818 12:05:04.444788    3824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:05:04.460942    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0818 12:05:04.470007    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0818 12:05:04.479404    3824 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0818 12:05:04.479474    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0818 12:05:04.488768    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:05:04.497773    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0818 12:05:04.506562    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:05:04.515469    3824 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 12:05:04.524688    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0818 12:05:04.533764    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0818 12:05:04.542630    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0818 12:05:04.551641    3824 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 12:05:04.559747    3824 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 12:05:04.568155    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:05:04.661227    3824 ssh_runner.go:195] Run: sudo systemctl restart containerd
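The run of sed edits above rewrites /etc/containerd/config.toml in place so containerd matches what the cluster expects: the cgroupfs driver (SystemdCgroup = false), the runc v2 shim for both legacy runtime names, the pause:3.10 sandbox image, and /etc/cni/net.d as the CNI config directory; the restart then picks the file up. Condensed to the two load-bearing edits:

    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd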
	I0818 12:05:04.678789    3824 start.go:495] detecting cgroup driver to use...
	I0818 12:05:04.678856    3824 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0818 12:05:04.693121    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:05:04.704334    3824 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 12:05:04.718489    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:05:04.731628    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:05:04.741778    3824 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0818 12:05:04.765854    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:05:04.776545    3824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:05:04.792787    3824 ssh_runner.go:195] Run: which cri-dockerd
	I0818 12:05:04.795674    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0818 12:05:04.802688    3824 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0818 12:05:04.816018    3824 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0818 12:05:04.913547    3824 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0818 12:05:05.026765    3824 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0818 12:05:05.026795    3824 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0818 12:05:05.040598    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:05:05.134191    3824 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 12:05:07.482472    3824 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.348334544s)
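The 130-byte /etc/docker/daemon.json pushed above is never echoed into the log. Given the surrounding "configuring docker to use cgroupfs" message, a minimal payload consistent with it (an assumption about the contents, not the verbatim file) would be:

    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker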
	I0818 12:05:07.482540    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0818 12:05:07.493839    3824 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0818 12:05:07.506964    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:05:07.517252    3824 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0818 12:05:07.612993    3824 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0818 12:05:07.715979    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:05:07.829879    3824 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0818 12:05:07.843247    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:05:07.854199    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:05:07.948839    3824 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0818 12:05:08.015240    3824 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0818 12:05:08.015316    3824 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0818 12:05:08.020551    3824 start.go:563] Will wait 60s for crictl version
	I0818 12:05:08.020605    3824 ssh_runner.go:195] Run: which crictl
	I0818 12:05:08.024481    3824 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 12:05:08.049504    3824 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0818 12:05:08.049590    3824 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:05:08.068921    3824 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:05:08.108445    3824 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0818 12:05:08.150167    3824 out.go:177]   - env NO_PROXY=192.169.0.5
	I0818 12:05:08.171157    3824 main.go:141] libmachine: (ha-373000-m02) Calling .GetIP
	I0818 12:05:08.171639    3824 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0818 12:05:08.176186    3824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
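The hosts-file edit above uses a remove-then-append pattern so repeated runs stay idempotent: strip any existing host.minikube.internal line, append the fresh mapping, and copy the temp file back over /etc/hosts. Generalized (temp path illustrative):

    { grep -v $'\thost.minikube.internal$' /etc/hosts; printf '192.169.0.1\thost.minikube.internal\n'; } > /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts   # cp rewrites in place rather than replacing the inode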
	I0818 12:05:08.185534    3824 mustload.go:65] Loading cluster: ha-373000
	I0818 12:05:08.185713    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:05:08.185923    3824 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:05:08.185945    3824 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:05:08.194524    3824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51818
	I0818 12:05:08.194866    3824 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:05:08.195227    3824 main.go:141] libmachine: Using API Version  1
	I0818 12:05:08.195244    3824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:05:08.195441    3824 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:05:08.195542    3824 main.go:141] libmachine: (ha-373000) Calling .GetState
	I0818 12:05:08.195619    3824 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:05:08.195696    3824 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid from json: 3836
	I0818 12:05:08.196597    3824 host.go:66] Checking if "ha-373000" exists ...
	I0818 12:05:08.196853    3824 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:05:08.196874    3824 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:05:08.205321    3824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51820
	I0818 12:05:08.205651    3824 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:05:08.205991    3824 main.go:141] libmachine: Using API Version  1
	I0818 12:05:08.206003    3824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:05:08.206254    3824 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:05:08.206377    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:05:08.206469    3824 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000 for IP: 192.169.0.6
	I0818 12:05:08.206476    3824 certs.go:194] generating shared ca certs ...
	I0818 12:05:08.206495    3824 certs.go:226] acquiring lock for ca certs: {Name:mkf9c4ce6e92bb713042213e4605fc93500c1ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:05:08.206643    3824 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key
	I0818 12:05:08.206701    3824 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key
	I0818 12:05:08.206711    3824 certs.go:256] generating profile certs ...
	I0818 12:05:08.206803    3824 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.key
	I0818 12:05:08.206887    3824 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.238ba961
	I0818 12:05:08.206947    3824 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key
	I0818 12:05:08.206955    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0818 12:05:08.206976    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0818 12:05:08.206995    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0818 12:05:08.207013    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0818 12:05:08.207030    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0818 12:05:08.207058    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0818 12:05:08.207082    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0818 12:05:08.207100    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0818 12:05:08.207176    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem (1338 bytes)
	W0818 12:05:08.207217    3824 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526_empty.pem, impossibly tiny 0 bytes
	I0818 12:05:08.207233    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem (1675 bytes)
	I0818 12:05:08.207270    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem (1082 bytes)
	I0818 12:05:08.207305    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem (1123 bytes)
	I0818 12:05:08.207341    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem (1675 bytes)
	I0818 12:05:08.207407    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:05:08.207441    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:05:08.207462    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem -> /usr/share/ca-certificates/1526.pem
	I0818 12:05:08.207480    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /usr/share/ca-certificates/15262.pem
	I0818 12:05:08.207506    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:05:08.207592    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:05:08.207678    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:05:08.207761    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:05:08.207840    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:05:08.236538    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0818 12:05:08.239929    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0818 12:05:08.248132    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0818 12:05:08.251185    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0818 12:05:08.259155    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0818 12:05:08.262371    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0818 12:05:08.270151    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0818 12:05:08.273887    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0818 12:05:08.282487    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0818 12:05:08.285536    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0818 12:05:08.293364    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0818 12:05:08.296397    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
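	The "scp ... --> memory" entries above read the existing control-plane secrets (sa.pub, sa.key, the front-proxy CA, and the etcd CA) off the primary node into memory so they can be pushed unchanged to the joining node. A rough shell equivalent, reusing the SSH endpoint, key, and username from the sshutil line above:

	  # read one shared secret off the primary control plane into a variable
	  sa_pub=$(ssh -i /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa docker@192.169.0.5 'sudo cat /var/lib/minikube/certs/sa.pub')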
	I0818 12:05:08.304405    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 12:05:08.324774    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0818 12:05:08.344299    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 12:05:08.364160    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0818 12:05:08.384209    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0818 12:05:08.403922    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 12:05:08.423745    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 12:05:08.443381    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 12:05:08.463375    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 12:05:08.483664    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem --> /usr/share/ca-certificates/1526.pem (1338 bytes)
	I0818 12:05:08.503661    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /usr/share/ca-certificates/15262.pem (1708 bytes)
	I0818 12:05:08.523065    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0818 12:05:08.536313    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0818 12:05:08.550006    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0818 12:05:08.563497    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0818 12:05:08.577251    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0818 12:05:08.590803    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0818 12:05:08.604390    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0818 12:05:08.618111    3824 ssh_runner.go:195] Run: openssl version
	I0818 12:05:08.622218    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 12:05:08.630462    3824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:05:08.633848    3824 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:38 /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:05:08.633898    3824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:05:08.638082    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 12:05:08.646091    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1526.pem && ln -fs /usr/share/ca-certificates/1526.pem /etc/ssl/certs/1526.pem"
	I0818 12:05:08.654220    3824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1526.pem
	I0818 12:05:08.657554    3824 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:54 /usr/share/ca-certificates/1526.pem
	I0818 12:05:08.657600    3824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1526.pem
	I0818 12:05:08.661803    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1526.pem /etc/ssl/certs/51391683.0"
	I0818 12:05:08.669959    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15262.pem && ln -fs /usr/share/ca-certificates/15262.pem /etc/ssl/certs/15262.pem"
	I0818 12:05:08.678394    3824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15262.pem
	I0818 12:05:08.681807    3824 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:54 /usr/share/ca-certificates/15262.pem
	I0818 12:05:08.681847    3824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15262.pem
	I0818 12:05:08.685950    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15262.pem /etc/ssl/certs/3ec20f2e.0"
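	The hash/symlink pairs above follow OpenSSL's c_rehash layout: a trusted CA is looked up through a symlink named <subject-hash>.0 (here b5213941.0, 51391683.0, and 3ec20f2e.0) pointing at the PEM file. A minimal sketch of the same convention, using a hypothetical example.pem:

	  # compute the subject hash, then create the <hash>.0 link OpenSSL expects
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
	  sudo ln -fs /etc/ssl/certs/example.pem "/etc/ssl/certs/${hash}.0"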
	I0818 12:05:08.694130    3824 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 12:05:08.697586    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 12:05:08.701969    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 12:05:08.706279    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 12:05:08.710463    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 12:05:08.714641    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 12:05:08.718883    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
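	Each "-checkend 86400" run above exits non-zero if the certificate will expire within 86400 seconds (24 hours), which is how minikube decides whether a cert still has enough lifetime left; for example:

	  # exit status 0 = still valid in 24h, 1 = expiring within 24h
	  openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	    && echo "valid for at least 24h" || echo "expiring soon"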
	I0818 12:05:08.723008    3824 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.31.0 docker true true} ...
	I0818 12:05:08.723074    3824 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-373000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
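	The empty "ExecStart=" line in the unit above is the standard systemd drop-in idiom: it clears the base kubelet.service command before the override sets the real one. A sketch of writing an equivalent drop-in by hand, assuming the flags shown above (in the run itself, the generated 10-kubeadm.conf is copied over SSH, as the scp lines below show):

	  # drop-in override: blank ExecStart= resets the base unit's command first
	  sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	  [Service]
	  ExecStart=
	  ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-373000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	  EOF
	  sudo systemctl daemon-reload && sudo systemctl restart kubelet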
	I0818 12:05:08.723091    3824 kube-vip.go:115] generating kube-vip config ...
	I0818 12:05:08.723120    3824 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0818 12:05:08.734860    3824 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0818 12:05:08.734897    3824 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
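	The static pod above runs kube-vip on each control-plane node: the instances elect a leader via the plndr-cp-lock lease, and the leader announces the HA VIP 192.169.0.254 over ARP on eth0, with lb_enable additionally load-balancing port 8443 across the API servers. Whether the VIP has landed on a given node can be checked with something like:

	  # does this node currently hold the VIP, and who holds the leader lease?
	  ip -4 addr show dev eth0 | grep 192.169.0.254
	  kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'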
	I0818 12:05:08.734943    3824 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 12:05:08.742519    3824 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 12:05:08.742560    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0818 12:05:08.749712    3824 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0818 12:05:08.763219    3824 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 12:05:08.776984    3824 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0818 12:05:08.790534    3824 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0818 12:05:08.793387    3824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
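	The grep at 12:05:08.790534 only tests whether the hosts entry already exists; the bash pipeline above then rewrites /etc/hosts idempotently: strip any stale control-plane.minikube.internal line, append the current mapping, and sudo-copy the temp file back into place. A readable form of the same pattern:

	  # remove any old entry, append the fresh one, then copy over /etc/hosts
	  { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	    echo $'192.169.0.254\tcontrol-plane.minikube.internal'
	  } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts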
	I0818 12:05:08.802777    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:05:08.900049    3824 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 12:05:08.914678    3824 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:05:08.914870    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:05:08.935865    3824 out.go:177] * Verifying Kubernetes components...
	I0818 12:05:08.977759    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:05:09.099141    3824 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 12:05:09.111487    3824 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:05:09.111691    3824 kapi.go:59] client config for ha-373000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xcb43f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0818 12:05:09.111727    3824 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0818 12:05:09.111887    3824 node_ready.go:35] waiting up to 6m0s for node "ha-373000-m02" to be "Ready" ...
	I0818 12:05:09.111971    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:09.111976    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:09.111984    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:09.111988    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.486764    3824 round_trippers.go:574] Response Status: 200 OK in 8375 milliseconds
	I0818 12:05:17.489585    3824 node_ready.go:49] node "ha-373000-m02" has status "Ready":"True"
	I0818 12:05:17.489601    3824 node_ready.go:38] duration metric: took 8.377957809s for node "ha-373000-m02" to be "Ready" ...
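	"Ready" here is the node's Ready condition as reported by the kubelet; the same check the poller performs can be made directly, e.g.:

	  # prints "True" once the kubelet reports the node Ready
	  kubectl get node ha-373000-m02 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'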
	I0818 12:05:17.489608    3824 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 12:05:17.489646    3824 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0818 12:05:17.489661    3824 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0818 12:05:17.489699    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0818 12:05:17.489704    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.489710    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.489715    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.530230    3824 round_trippers.go:574] Response Status: 200 OK in 40 milliseconds
	I0818 12:05:17.537636    3824 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hv98f" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.537709    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-hv98f
	I0818 12:05:17.537723    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.537734    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.537739    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.557447    3824 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0818 12:05:17.557935    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:17.557944    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.557953    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.557959    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.560556    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:17.560923    3824 pod_ready.go:93] pod "coredns-6f6b679f8f-hv98f" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:17.560933    3824 pod_ready.go:82] duration metric: took 23.281295ms for pod "coredns-6f6b679f8f-hv98f" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.560940    3824 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-rcfmc" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.560984    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-rcfmc
	I0818 12:05:17.560989    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.560995    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.560998    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.564580    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:17.565125    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:17.565134    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.565139    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.565163    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.569356    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:17.569742    3824 pod_ready.go:93] pod "coredns-6f6b679f8f-rcfmc" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:17.569751    3824 pod_ready.go:82] duration metric: took 8.807255ms for pod "coredns-6f6b679f8f-rcfmc" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.569758    3824 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.569797    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000
	I0818 12:05:17.569803    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.569809    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.569812    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.574840    3824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 12:05:17.575184    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:17.575192    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.575199    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.575202    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.578378    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:17.578782    3824 pod_ready.go:93] pod "etcd-ha-373000" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:17.578792    3824 pod_ready.go:82] duration metric: took 9.028915ms for pod "etcd-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.578799    3824 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.578838    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m02
	I0818 12:05:17.578843    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.578849    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.578854    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.580930    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:17.581338    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:17.581345    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.581351    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.581356    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.583546    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:17.584029    3824 pod_ready.go:93] pod "etcd-ha-373000-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:17.584039    3824 pod_ready.go:82] duration metric: took 5.23429ms for pod "etcd-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.584046    3824 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.584081    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:05:17.584087    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.584092    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.584102    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.586354    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:17.690238    3824 request.go:632] Waited for 103.365151ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:05:17.690287    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:05:17.690294    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.690299    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.690305    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.696245    3824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 12:05:17.696879    3824 pod_ready.go:93] pod "etcd-ha-373000-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:17.696890    3824 pod_ready.go:82] duration metric: took 112.842369ms for pod "etcd-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.696903    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:17.889742    3824 request.go:632] Waited for 192.805887ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000
	I0818 12:05:17.889790    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000
	I0818 12:05:17.889813    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:17.889819    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:17.889825    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:17.985037    3824 round_trippers.go:574] Response Status: 200 OK in 95 milliseconds
	I0818 12:05:18.089860    3824 request.go:632] Waited for 104.39101ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:18.089903    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:18.089927    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:18.089935    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:18.089944    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:18.093863    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:18.094247    3824 pod_ready.go:98] node "ha-373000" hosting pod "kube-apiserver-ha-373000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-373000" has status "Ready":"False"
	I0818 12:05:18.094258    3824 pod_ready.go:82] duration metric: took 397.361513ms for pod "kube-apiserver-ha-373000" in "kube-system" namespace to be "Ready" ...
	E0818 12:05:18.094264    3824 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-373000" hosting pod "kube-apiserver-ha-373000" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-373000" has status "Ready":"False"
	I0818 12:05:18.094272    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:18.289789    3824 request.go:632] Waited for 195.476866ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:18.289877    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:18.289885    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:18.289892    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:18.289896    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:18.292952    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:18.489842    3824 request.go:632] Waited for 196.327806ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:18.489909    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:18.489917    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:18.489923    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:18.489927    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:18.494638    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:18.690780    3824 request.go:632] Waited for 96.165189ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:18.690864    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:18.690871    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:18.690878    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:18.690883    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:18.694201    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:18.890381    3824 request.go:632] Waited for 195.63212ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:18.890423    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:18.890429    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:18.890458    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:18.890462    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:18.893043    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:19.095616    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:19.095638    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:19.095645    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:19.095649    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:19.097986    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:19.290759    3824 request.go:632] Waited for 192.087215ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:19.290839    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:19.290847    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:19.290853    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:19.290860    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:19.293249    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:19.594823    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:19.594840    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:19.594847    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:19.594850    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:19.597610    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:19.690481    3824 request.go:632] Waited for 92.316894ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:19.690550    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:19.690558    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:19.690564    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:19.690568    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:19.694901    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:20.095867    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:20.095894    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:20.095905    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:20.095910    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:20.099922    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:20.100437    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:20.100445    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:20.100451    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:20.100455    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:20.102106    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:20.102474    3824 pod_ready.go:103] pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:20.595432    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:20.595453    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:20.595462    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:20.595466    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:20.597863    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:20.598227    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:20.598234    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:20.598240    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:20.598244    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:20.600061    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:21.094536    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:21.094563    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:21.094572    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:21.094577    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:21.097999    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:21.098519    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:21.098527    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:21.098533    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:21.098537    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:21.100015    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:21.595468    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:21.595500    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:21.595514    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:21.595523    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:21.601631    3824 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0818 12:05:21.601997    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:21.602004    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:21.602010    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:21.602017    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:21.605192    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:22.094552    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:22.094567    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:22.094574    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:22.094577    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:22.096991    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:22.097657    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:22.097665    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:22.097671    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:22.097675    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:22.099680    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:22.595859    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:22.595888    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:22.595900    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:22.595906    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:22.599261    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:22.599791    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:22.599802    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:22.599810    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:22.599816    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:22.602572    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:22.602966    3824 pod_ready.go:103] pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:23.096362    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:23.096389    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:23.096401    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:23.096407    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:23.100039    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:23.100588    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:23.100596    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:23.100601    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:23.100605    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:23.102265    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:23.595179    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:23.595208    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:23.595221    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:23.595229    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:23.598872    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:23.599421    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:23.599444    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:23.599450    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:23.599452    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:23.601013    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:24.095296    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:24.095327    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:24.095339    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:24.095344    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:24.099211    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:24.099655    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:24.099662    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:24.099668    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:24.099671    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:24.101457    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:24.595373    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:24.595395    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:24.595406    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:24.595412    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:24.599194    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:24.599738    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:24.599748    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:24.599754    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:24.599758    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:24.601701    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:25.094729    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:25.094756    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:25.094765    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:25.094770    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:25.098009    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:25.098599    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:25.098609    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:25.098617    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:25.098622    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:25.100470    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:25.100761    3824 pod_ready.go:103] pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:25.594953    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:25.594981    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:25.594993    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:25.595002    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:25.598801    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:25.599323    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:25.599331    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:25.599337    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:25.599340    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:25.601145    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:26.094462    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:26.094491    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:26.094502    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:26.094508    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:26.098279    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:26.098847    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:26.098857    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:26.098865    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:26.098869    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:26.100368    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:26.596309    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:26.596379    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:26.596394    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:26.596402    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:26.600128    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:26.600593    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:26.600601    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:26.600607    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:26.600613    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:26.602191    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:27.095574    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:27.095602    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:27.095613    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:27.095619    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:27.099557    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:27.100033    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:27.100043    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:27.100050    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:27.100075    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:27.101821    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:27.102055    3824 pod_ready.go:103] pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:27.594913    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:27.594967    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:27.594980    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:27.594986    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:27.598307    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:27.598905    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:27.598915    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:27.598923    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:27.598937    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:27.600697    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:28.095806    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:28.095836    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:28.095880    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:28.095892    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:28.099409    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:28.099885    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:28.099894    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:28.099904    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:28.099909    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:28.101420    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:28.594673    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:28.594699    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:28.594710    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:28.594716    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:28.598247    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:28.599059    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:28.599066    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:28.599071    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:28.599074    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:28.600807    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:29.095468    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:29.095495    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:29.095506    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:29.095515    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:29.099742    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:29.100208    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:29.100215    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:29.100221    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:29.100224    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:29.101920    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:29.102352    3824 pod_ready.go:103] pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:29.595041    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:29.595067    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:29.595079    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:29.595086    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:29.598712    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:29.599364    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:29.599372    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:29.599378    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:29.599384    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:29.601219    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:30.094218    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:30.094243    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:30.094255    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:30.094262    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:30.097685    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:30.098375    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:30.098384    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:30.098390    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:30.098393    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:30.099950    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:30.594415    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:30.594441    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:30.594453    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:30.594461    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:30.597799    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:30.598380    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:30.598391    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:30.598399    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:30.598407    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:30.600100    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:31.095000    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:31.095037    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:31.095081    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:31.095091    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:31.098989    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:31.099523    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:31.099535    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:31.099543    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:31.099565    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:31.101114    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:31.596112    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:31.596139    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:31.596151    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:31.596156    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:31.601060    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:31.601464    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:31.601473    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:31.601478    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:31.601482    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:31.608084    3824 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0818 12:05:31.608636    3824 pod_ready.go:103] pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:32.094503    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:32.094530    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:32.094541    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:32.094556    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:32.098239    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:32.099234    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:32.099247    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:32.099255    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:32.099260    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:32.101138    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:32.594723    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:05:32.594751    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:32.594795    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:32.594802    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:32.598658    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:32.599491    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:32.599499    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:32.599505    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:32.599508    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:32.601334    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:32.601711    3824 pod_ready.go:93] pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:32.601720    3824 pod_ready.go:82] duration metric: took 14.507895611s for pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:32.601726    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:32.601761    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m03
	I0818 12:05:32.601766    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:32.601772    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:32.601777    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:32.603708    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:32.604204    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:05:32.604212    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:32.604218    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:32.604222    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:32.606340    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:32.606652    3824 pod_ready.go:93] pod "kube-apiserver-ha-373000-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:32.606661    3824 pod_ready.go:82] duration metric: took 4.92937ms for pod "kube-apiserver-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
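Each "waiting up to 6m0s for pod ... to be \"Ready\"" line above opens a poll loop that re-GETs the pod (and its node) roughly every 500ms, logging has status "Ready":"False" while it waits and closing with a "duration metric: took ..." line once the PodReady condition flips, whether that takes 14.5s (kube-apiserver-ha-373000-m02 above) or under 5ms (-m03, already Ready on the first probe). Below is a minimal sketch of that loop against client-go, assuming an already-configured clientset; waitPodReady is a hypothetical helper, not minikube's pod_ready.go, which additionally re-checks the node on every iteration:

	package sketch

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls every 500ms, for at most 6 minutes, until the
	// pod's PodReady condition is True -- the cadence visible in the
	// timestamps above (probes at ~.1s and ~.6s of every second).
	func waitPodReady(cs kubernetes.Interface, ns, name string) error {
		start := time.Now()
		err := wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API hiccups as "not ready yet" and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
		if err != nil {
			return err
		}
		fmt.Printf("duration metric: took %s for pod %q to be \"Ready\"\n", time.Since(start), name)
		return nil
	}

Called as waitPodReady(cs, "kube-system", "kube-controller-manager-ha-373000"), this yields the same took-N-seconds metric that the next wait below ends with.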
	I0818 12:05:32.606674    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:32.606703    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:32.606708    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:32.606713    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:32.606717    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:32.609503    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:32.609918    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:32.609926    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:32.609931    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:32.609935    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:32.611839    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:33.108118    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:33.108139    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:33.108150    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:33.108155    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:33.111861    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:33.112554    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:33.112561    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:33.112567    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:33.112570    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:33.114401    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:33.608245    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:33.608285    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:33.608296    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:33.608313    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:33.611023    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:33.611446    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:33.611454    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:33.611460    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:33.611463    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:33.614112    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:34.106924    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:34.106945    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:34.106955    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:34.106961    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:34.110853    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:34.111241    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:34.111248    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:34.111254    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:34.111257    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:34.112969    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:34.606890    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:34.606910    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:34.606922    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:34.606934    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:34.610565    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:34.611180    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:34.611189    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:34.611194    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:34.611199    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:34.613556    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:34.613896    3824 pod_ready.go:103] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:35.108933    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:35.108955    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:35.108967    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:35.108975    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:35.113015    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:35.113665    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:35.113676    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:35.113684    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:35.113693    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:35.115446    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:35.607846    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:35.607862    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:35.607871    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:35.607875    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:35.610400    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:35.610817    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:35.610824    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:35.610830    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:35.610834    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:35.613002    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:36.107806    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:36.107834    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:36.107845    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:36.107850    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:36.111350    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:36.112008    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:36.112016    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:36.112022    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:36.112026    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:36.113688    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:36.607575    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:36.607590    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:36.607599    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:36.607605    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:36.610466    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:36.611075    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:36.611084    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:36.611092    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:36.611097    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:36.613213    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:37.107561    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:37.107587    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:37.107599    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:37.107607    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:37.111699    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:37.112198    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:37.112206    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:37.112212    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:37.112215    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:37.114106    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:37.114461    3824 pod_ready.go:103] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:37.606742    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:37.606757    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:37.606765    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:37.606769    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:37.609706    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:37.610101    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:37.610109    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:37.610115    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:37.610119    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:37.612095    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:38.108768    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:38.108787    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:38.108799    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:38.108807    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:38.112123    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:38.112659    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:38.112670    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:38.112677    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:38.112683    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:38.114718    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:38.606675    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:38.606689    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:38.606698    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:38.606703    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:38.609037    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:38.609536    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:38.609544    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:38.609549    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:38.609552    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:38.611709    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:39.107160    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:39.107184    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:39.107196    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:39.107203    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:39.110902    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:39.111438    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:39.111449    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:39.111457    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:39.111464    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:39.113475    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:39.606755    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:39.606770    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:39.606778    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:39.606782    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:39.609155    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:39.609534    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:39.609542    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:39.609548    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:39.609550    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:39.611533    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:39.611812    3824 pod_ready.go:103] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:40.107090    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:40.107116    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:40.107127    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:40.107135    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:40.110428    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:40.110932    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:40.110939    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:40.110945    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:40.110949    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:40.112726    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:40.607329    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:40.607344    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:40.607352    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:40.607358    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:40.609414    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:40.609793    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:40.609800    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:40.609806    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:40.609809    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:40.612006    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:41.108754    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:41.108777    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:41.108788    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:41.108794    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:41.112868    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:41.113578    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:41.113585    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:41.113591    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:41.113594    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:41.115666    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:41.607779    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:41.607794    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:41.607800    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:41.607803    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:41.626429    3824 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0818 12:05:41.626909    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:41.626917    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:41.626923    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:41.626928    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:41.638016    3824 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0818 12:05:41.638320    3824 pod_ready.go:103] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:42.107843    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:42.107861    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:42.107874    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:42.107877    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:42.125357    3824 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0818 12:05:42.125762    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:42.125770    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:42.125777    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:42.125794    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:42.137025    3824 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0818 12:05:42.606837    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:42.606853    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:42.606859    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:42.606863    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:42.631392    3824 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0818 12:05:42.632047    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:42.632055    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:42.632061    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:42.632064    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:42.644074    3824 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0818 12:05:43.106555    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:43.106567    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:43.106574    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:43.106577    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:43.108847    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:43.109231    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:43.109240    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:43.109246    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:43.109249    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:43.111648    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:43.607253    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:43.607270    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:43.607276    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:43.607281    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:43.609519    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:43.610124    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:43.610132    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:43.610138    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:43.610141    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:43.611865    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:44.106960    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:44.106982    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:44.106991    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:44.106996    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:44.110958    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:44.111626    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:44.111634    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:44.111640    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:44.111643    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:44.113355    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:44.113674    3824 pod_ready.go:103] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:44.606783    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:44.606795    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:44.606803    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:44.606806    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:44.609512    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:44.609978    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:44.609987    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:44.609993    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:44.609997    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:44.612208    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:45.108541    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:45.108568    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:45.108585    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:45.108627    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:45.112710    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:45.113170    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:45.113180    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:45.113188    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:45.113192    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:45.115093    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:45.607694    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:45.607709    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:45.607715    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:45.607718    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:45.609538    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:45.610190    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:45.610198    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:45.610204    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:45.610207    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:45.612007    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:46.107742    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:46.107761    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:46.107773    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:46.107781    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:46.111014    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:46.111681    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:46.111693    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:46.111701    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:46.111706    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:46.113564    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:46.113901    3824 pod_ready.go:103] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:46.607572    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:46.607584    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:46.607590    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:46.607594    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:46.609579    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:46.610284    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:46.610292    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:46.610297    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:46.610300    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:46.611985    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:47.107288    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:47.107311    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:47.107323    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:47.107328    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:47.110824    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:47.111541    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:47.111549    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:47.111554    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:47.111557    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:47.113249    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:47.606697    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:47.606709    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:47.606715    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:47.606718    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:47.608497    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:47.608927    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:47.608936    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:47.608941    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:47.608946    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:47.610440    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:48.106930    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:48.106956    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:48.106968    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:48.106974    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:48.110658    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:48.111153    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:48.111161    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:48.111167    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:48.111170    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:48.112733    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:48.606534    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:48.606547    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:48.606553    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:48.606556    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:48.608472    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:48.608894    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:48.608902    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:48.608908    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:48.608913    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:48.611651    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:48.611942    3824 pod_ready.go:103] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:49.107605    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:49.107632    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:49.107644    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:49.107650    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:49.111426    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:49.112028    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:49.112036    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:49.112041    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:49.112043    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:49.113955    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:49.607070    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:49.607085    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:49.607091    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:49.607095    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:49.608755    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:49.609118    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:49.609126    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:49.609132    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:49.609136    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:49.610469    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:50.108393    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:50.108414    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:50.108426    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:50.108432    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:50.111769    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:50.112262    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:50.112273    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:50.112280    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:50.112284    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:50.114291    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:50.606734    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:50.606749    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:50.606755    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:50.606758    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:50.608846    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:50.609305    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:50.609313    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:50.609318    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:50.609323    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:50.610972    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:51.107143    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:51.107164    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:51.107174    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:51.107180    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:51.110468    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:51.111149    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:51.111161    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:51.111182    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:51.111186    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:51.112895    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:51.113303    3824 pod_ready.go:103] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:51.607479    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:51.607491    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:51.607498    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:51.607502    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:51.609461    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:51.609979    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:51.609987    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:51.609993    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:51.609997    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:51.611838    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:52.106475    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:52.106495    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:52.106506    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:52.106512    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:52.110099    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:52.110714    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:52.110722    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:52.110728    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:52.110732    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:52.112418    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:52.606202    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:52.606215    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:52.606221    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:52.606224    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:52.608174    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:52.608702    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:52.608710    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:52.608716    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:52.608719    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:52.610185    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:53.106308    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:53.106366    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:53.106379    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:53.106387    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:53.109686    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:53.110263    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:53.110271    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:53.110277    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:53.110279    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:53.111992    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:53.606611    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:53.606626    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:53.606632    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:53.606637    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:53.608462    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:53.608915    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:53.608923    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:53.608928    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:53.608932    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:53.610639    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:53.611044    3824 pod_ready.go:103] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"False"
	I0818 12:05:54.108224    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:05:54.108251    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.108263    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.108270    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.112154    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:54.112694    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:54.112704    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.112715    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.112728    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.114303    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:54.114688    3824 pod_ready.go:93] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:54.114698    3824 pod_ready.go:82] duration metric: took 21.508688862s for pod "kube-controller-manager-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.114704    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.114734    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000-m02
	I0818 12:05:54.114740    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.114745    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.114749    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.116392    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:54.116762    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:54.116769    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.116775    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.116779    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.118208    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:54.118583    3824 pod_ready.go:93] pod "kube-controller-manager-ha-373000-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:54.118591    3824 pod_ready.go:82] duration metric: took 3.881464ms for pod "kube-controller-manager-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.118597    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.118626    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000-m03
	I0818 12:05:54.118631    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.118637    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.118639    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.120323    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:54.120754    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:05:54.120761    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.120767    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.120773    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.122312    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:54.122605    3824 pod_ready.go:93] pod "kube-controller-manager-ha-373000-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:54.122614    3824 pod_ready.go:82] duration metric: took 4.012121ms for pod "kube-controller-manager-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.122620    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2xkhp" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.122653    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2xkhp
	I0818 12:05:54.122658    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.122664    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.122668    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.124297    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:54.124644    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:54.124651    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.124657    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.124661    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.126346    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:54.126734    3824 pod_ready.go:93] pod "kube-proxy-2xkhp" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:54.126744    3824 pod_ready.go:82] duration metric: took 4.119352ms for pod "kube-proxy-2xkhp" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.126751    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5hg88" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.126784    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5hg88
	I0818 12:05:54.126789    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.126795    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.126798    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.128343    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:54.128709    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:54.128717    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.128722    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.128726    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.130213    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:05:54.130501    3824 pod_ready.go:93] pod "kube-proxy-5hg88" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:54.130510    3824 pod_ready.go:82] duration metric: took 3.754726ms for pod "kube-proxy-5hg88" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.130516    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bprqp" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.308685    3824 request.go:632] Waited for 178.119131ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bprqp
	I0818 12:05:54.308820    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bprqp
	I0818 12:05:54.308835    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.308860    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.308867    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.312453    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:54.508339    3824 request.go:632] Waited for 195.466477ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:05:54.508484    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:05:54.508495    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.508506    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.508513    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.512283    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:54.512758    3824 pod_ready.go:93] pod "kube-proxy-bprqp" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:54.512768    3824 pod_ready.go:82] duration metric: took 382.258295ms for pod "kube-proxy-bprqp" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.512781    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-l7zlx" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.709741    3824 request.go:632] Waited for 196.915457ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l7zlx
	I0818 12:05:54.709834    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l7zlx
	I0818 12:05:54.709843    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.709854    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.709864    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.713388    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:54.909468    3824 request.go:632] Waited for 195.387253ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m04
	I0818 12:05:54.909519    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m04
	I0818 12:05:54.909527    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:54.909538    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:54.909546    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:54.912861    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:54.913329    3824 pod_ready.go:93] pod "kube-proxy-l7zlx" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:54.913345    3824 pod_ready.go:82] duration metric: took 400.569828ms for pod "kube-proxy-l7zlx" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:54.913354    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:55.108201    3824 request.go:632] Waited for 194.795409ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000
	I0818 12:05:55.108295    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000
	I0818 12:05:55.108307    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:55.108318    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:55.108327    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:55.112015    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:55.308912    3824 request.go:632] Waited for 196.31979ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:55.308961    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:05:55.308969    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:55.308980    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:55.308988    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:55.312226    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:55.312828    3824 pod_ready.go:93] pod "kube-scheduler-ha-373000" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:55.312838    3824 pod_ready.go:82] duration metric: took 399.489444ms for pod "kube-scheduler-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:55.312844    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:55.509991    3824 request.go:632] Waited for 197.064513ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000-m02
	I0818 12:05:55.510043    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000-m02
	I0818 12:05:55.510054    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:55.510064    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:55.510071    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:55.512986    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:05:55.708355    3824 request.go:632] Waited for 194.791144ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:55.708418    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:05:55.708434    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:55.708472    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:55.708482    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:55.712929    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:55.713618    3824 pod_ready.go:93] pod "kube-scheduler-ha-373000-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:55.713628    3824 pod_ready.go:82] duration metric: took 400.791519ms for pod "kube-scheduler-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:55.713635    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:55.908894    3824 request.go:632] Waited for 195.195069ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000-m03
	I0818 12:05:55.908997    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000-m03
	I0818 12:05:55.909005    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:55.909017    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:55.909027    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:55.913053    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:56.108627    3824 request.go:632] Waited for 195.198114ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:05:56.108723    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:05:56.108739    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:56.108753    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:56.108764    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:56.112296    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:05:56.112725    3824 pod_ready.go:93] pod "kube-scheduler-ha-373000-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 12:05:56.112739    3824 pod_ready.go:82] duration metric: took 399.110792ms for pod "kube-scheduler-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:05:56.112748    3824 pod_ready.go:39] duration metric: took 38.624333262s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 12:05:56.112771    3824 api_server.go:52] waiting for apiserver process to appear ...
	I0818 12:05:56.112832    3824 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 12:05:56.125705    3824 api_server.go:72] duration metric: took 47.212470661s to wait for apiserver process to appear ...
	I0818 12:05:56.125716    3824 api_server.go:88] waiting for apiserver healthz status ...
	I0818 12:05:56.125733    3824 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0818 12:05:56.128805    3824 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0818 12:05:56.128837    3824 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0818 12:05:56.128843    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:56.128849    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:56.128853    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:56.129433    3824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0818 12:05:56.129522    3824 api_server.go:141] control plane version: v1.31.0
	I0818 12:05:56.129534    3824 api_server.go:131] duration metric: took 3.812968ms to wait for apiserver health ...
	I0818 12:05:56.129542    3824 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 12:05:56.308455    3824 request.go:632] Waited for 178.848504ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0818 12:05:56.308546    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0818 12:05:56.308556    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:56.308568    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:56.308578    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:56.314109    3824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 12:05:56.319517    3824 system_pods.go:59] 26 kube-system pods found
	I0818 12:05:56.319538    3824 system_pods.go:61] "coredns-6f6b679f8f-hv98f" [3ebd3bbf-bcb9-4afc-80f3-eaeb314da8eb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 12:05:56.319544    3824 system_pods.go:61] "coredns-6f6b679f8f-rcfmc" [904f6af6-c2de-4bc2-bc1d-f87e209ef4ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 12:05:56.319550    3824 system_pods.go:61] "etcd-ha-373000" [29cf5b88-f2b5-40d4-9f40-5aedcc75c4d1] Running
	I0818 12:05:56.319554    3824 system_pods.go:61] "etcd-ha-373000-m02" [6b2f793b-6d8c-4580-b555-2be21fb70246] Running
	I0818 12:05:56.319557    3824 system_pods.go:61] "etcd-ha-373000-m03" [377788a8-77e7-4e5e-a488-9791f8e26b32] Running
	I0818 12:05:56.319560    3824 system_pods.go:61] "kindnet-2gf5h" [ff15d17a-fb96-4721-847f-13f5c0e2613a] Running
	I0818 12:05:56.319562    3824 system_pods.go:61] "kindnet-k4c4p" [5219ec54-5c0a-471a-a99f-2823b4f944d9] Running
	I0818 12:05:56.319565    3824 system_pods.go:61] "kindnet-q7ghp" [933c4eb9-1d38-4575-a873-183f2fd31b25] Running
	I0818 12:05:56.319567    3824 system_pods.go:61] "kindnet-wxcx9" [11d95b76-db04-43de-9d3d-ce2147fbed21] Running
	I0818 12:05:56.319570    3824 system_pods.go:61] "kube-apiserver-ha-373000" [cc4f174d-0e3c-4aa8-b039-0bf57d6b0228] Running
	I0818 12:05:56.319574    3824 system_pods.go:61] "kube-apiserver-ha-373000-m02" [b72f02e8-46f2-46d4-8e73-03281cc424fc] Running
	I0818 12:05:56.319577    3824 system_pods.go:61] "kube-apiserver-ha-373000-m03" [5d3892b8-07e1-40a0-b4dc-4d9d9e8bc254] Running
	I0818 12:05:56.319580    3824 system_pods.go:61] "kube-controller-manager-ha-373000" [78b55cfd-8f33-4071-99b0-900fcb256ed1] Running
	I0818 12:05:56.319583    3824 system_pods.go:61] "kube-controller-manager-ha-373000-m02" [e261db19-d00f-41d1-aaf2-c1b77067c217] Running
	I0818 12:05:56.319586    3824 system_pods.go:61] "kube-controller-manager-ha-373000-m03" [fa71d7d9-904d-4df6-afe9-768758e32854] Running
	I0818 12:05:56.319589    3824 system_pods.go:61] "kube-proxy-2xkhp" [6a1cb8ad-e118-4db0-8852-b2d4973f536a] Running
	I0818 12:05:56.319592    3824 system_pods.go:61] "kube-proxy-5hg88" [4d00bf3e-8817-4220-91c5-7085fa453fa6] Running
	I0818 12:05:56.319595    3824 system_pods.go:61] "kube-proxy-bprqp" [4cf4fd8d-01fb-4e36-bede-6a44ab84b7bf] Running
	I0818 12:05:56.319597    3824 system_pods.go:61] "kube-proxy-l7zlx" [853afdf8-598a-435c-8c48-233287580493] Running
	I0818 12:05:56.319600    3824 system_pods.go:61] "kube-scheduler-ha-373000" [b1284a96-b76a-4909-9301-bfff7bbef8d6] Running
	I0818 12:05:56.319602    3824 system_pods.go:61] "kube-scheduler-ha-373000-m02" [fdfdc014-0b22-48aa-a1c6-5885bc67d472] Running
	I0818 12:05:56.319605    3824 system_pods.go:61] "kube-scheduler-ha-373000-m03" [89e378c4-274f-4aa1-8683-127061a7033e] Running
	I0818 12:05:56.319607    3824 system_pods.go:61] "kube-vip-ha-373000" [8df6a228-9693-4a21-af87-1bc9fc2af995] Running
	I0818 12:05:56.319610    3824 system_pods.go:61] "kube-vip-ha-373000-m02" [9cf001da-2ec7-4c04-bcb8-baa2bd4626f4] Running
	I0818 12:05:56.319612    3824 system_pods.go:61] "kube-vip-ha-373000-m03" [2e8a44eb-d965-4a90-ae08-464c7064ae17] Running
	I0818 12:05:56.319615    3824 system_pods.go:61] "storage-provisioner" [aa9c6f5d-6c1e-4901-83bb-62bc420ea044] Running
	I0818 12:05:56.319618    3824 system_pods.go:74] duration metric: took 190.077141ms to wait for pod list to return data ...
	I0818 12:05:56.319624    3824 default_sa.go:34] waiting for default service account to be created ...
	I0818 12:05:56.509526    3824 request.go:632] Waited for 189.85421ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0818 12:05:56.509622    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0818 12:05:56.509631    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:56.509641    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:56.509651    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:56.513692    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:56.513814    3824 default_sa.go:45] found service account: "default"
	I0818 12:05:56.513823    3824 default_sa.go:55] duration metric: took 194.201187ms for default service account to be created ...
	I0818 12:05:56.513831    3824 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 12:05:56.708948    3824 request.go:632] Waited for 195.078219ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0818 12:05:56.709031    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0818 12:05:56.709042    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:56.709053    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:56.709059    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:56.714162    3824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 12:05:56.719538    3824 system_pods.go:86] 26 kube-system pods found
	I0818 12:05:56.719553    3824 system_pods.go:89] "coredns-6f6b679f8f-hv98f" [3ebd3bbf-bcb9-4afc-80f3-eaeb314da8eb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 12:05:56.719567    3824 system_pods.go:89] "coredns-6f6b679f8f-rcfmc" [904f6af6-c2de-4bc2-bc1d-f87e209ef4ed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 12:05:56.719573    3824 system_pods.go:89] "etcd-ha-373000" [29cf5b88-f2b5-40d4-9f40-5aedcc75c4d1] Running
	I0818 12:05:56.719577    3824 system_pods.go:89] "etcd-ha-373000-m02" [6b2f793b-6d8c-4580-b555-2be21fb70246] Running
	I0818 12:05:56.719580    3824 system_pods.go:89] "etcd-ha-373000-m03" [377788a8-77e7-4e5e-a488-9791f8e26b32] Running
	I0818 12:05:56.719584    3824 system_pods.go:89] "kindnet-2gf5h" [ff15d17a-fb96-4721-847f-13f5c0e2613a] Running
	I0818 12:05:56.719587    3824 system_pods.go:89] "kindnet-k4c4p" [5219ec54-5c0a-471a-a99f-2823b4f944d9] Running
	I0818 12:05:56.719589    3824 system_pods.go:89] "kindnet-q7ghp" [933c4eb9-1d38-4575-a873-183f2fd31b25] Running
	I0818 12:05:56.719593    3824 system_pods.go:89] "kindnet-wxcx9" [11d95b76-db04-43de-9d3d-ce2147fbed21] Running
	I0818 12:05:56.719596    3824 system_pods.go:89] "kube-apiserver-ha-373000" [cc4f174d-0e3c-4aa8-b039-0bf57d6b0228] Running
	I0818 12:05:56.719598    3824 system_pods.go:89] "kube-apiserver-ha-373000-m02" [b72f02e8-46f2-46d4-8e73-03281cc424fc] Running
	I0818 12:05:56.719602    3824 system_pods.go:89] "kube-apiserver-ha-373000-m03" [5d3892b8-07e1-40a0-b4dc-4d9d9e8bc254] Running
	I0818 12:05:56.719605    3824 system_pods.go:89] "kube-controller-manager-ha-373000" [78b55cfd-8f33-4071-99b0-900fcb256ed1] Running
	I0818 12:05:56.719608    3824 system_pods.go:89] "kube-controller-manager-ha-373000-m02" [e261db19-d00f-41d1-aaf2-c1b77067c217] Running
	I0818 12:05:56.719612    3824 system_pods.go:89] "kube-controller-manager-ha-373000-m03" [fa71d7d9-904d-4df6-afe9-768758e32854] Running
	I0818 12:05:56.719614    3824 system_pods.go:89] "kube-proxy-2xkhp" [6a1cb8ad-e118-4db0-8852-b2d4973f536a] Running
	I0818 12:05:56.719617    3824 system_pods.go:89] "kube-proxy-5hg88" [4d00bf3e-8817-4220-91c5-7085fa453fa6] Running
	I0818 12:05:56.719620    3824 system_pods.go:89] "kube-proxy-bprqp" [4cf4fd8d-01fb-4e36-bede-6a44ab84b7bf] Running
	I0818 12:05:56.719622    3824 system_pods.go:89] "kube-proxy-l7zlx" [853afdf8-598a-435c-8c48-233287580493] Running
	I0818 12:05:56.719625    3824 system_pods.go:89] "kube-scheduler-ha-373000" [b1284a96-b76a-4909-9301-bfff7bbef8d6] Running
	I0818 12:05:56.719627    3824 system_pods.go:89] "kube-scheduler-ha-373000-m02" [fdfdc014-0b22-48aa-a1c6-5885bc67d472] Running
	I0818 12:05:56.719630    3824 system_pods.go:89] "kube-scheduler-ha-373000-m03" [89e378c4-274f-4aa1-8683-127061a7033e] Running
	I0818 12:05:56.719633    3824 system_pods.go:89] "kube-vip-ha-373000" [8df6a228-9693-4a21-af87-1bc9fc2af995] Running
	I0818 12:05:56.719636    3824 system_pods.go:89] "kube-vip-ha-373000-m02" [9cf001da-2ec7-4c04-bcb8-baa2bd4626f4] Running
	I0818 12:05:56.719638    3824 system_pods.go:89] "kube-vip-ha-373000-m03" [2e8a44eb-d965-4a90-ae08-464c7064ae17] Running
	I0818 12:05:56.719641    3824 system_pods.go:89] "storage-provisioner" [aa9c6f5d-6c1e-4901-83bb-62bc420ea044] Running
	I0818 12:05:56.719645    3824 system_pods.go:126] duration metric: took 205.816796ms to wait for k8s-apps to be running ...
	I0818 12:05:56.719654    3824 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 12:05:56.719707    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 12:05:56.730176    3824 system_svc.go:56] duration metric: took 10.521627ms WaitForService to wait for kubelet
	I0818 12:05:56.730190    3824 kubeadm.go:582] duration metric: took 47.816976086s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:05:56.730206    3824 node_conditions.go:102] verifying NodePressure condition ...
	I0818 12:05:56.908283    3824 request.go:632] Waited for 178.034149ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0818 12:05:56.908349    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0818 12:05:56.908360    3824 round_trippers.go:469] Request Headers:
	I0818 12:05:56.908372    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:05:56.908382    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:05:56.912474    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:05:56.913347    3824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 12:05:56.913361    3824 node_conditions.go:123] node cpu capacity is 2
	I0818 12:05:56.913370    3824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 12:05:56.913375    3824 node_conditions.go:123] node cpu capacity is 2
	I0818 12:05:56.913378    3824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 12:05:56.913381    3824 node_conditions.go:123] node cpu capacity is 2
	I0818 12:05:56.913384    3824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 12:05:56.913387    3824 node_conditions.go:123] node cpu capacity is 2
	I0818 12:05:56.913390    3824 node_conditions.go:105] duration metric: took 183.185572ms to run NodePressure ...
	I0818 12:05:56.913403    3824 start.go:241] waiting for startup goroutines ...
	I0818 12:05:56.913420    3824 start.go:255] writing updated cluster config ...
	I0818 12:05:56.936907    3824 out.go:201] 
	I0818 12:05:56.957765    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:05:56.957829    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:05:56.978649    3824 out.go:177] * Starting "ha-373000-m03" control-plane node in "ha-373000" cluster
	I0818 12:05:57.020705    3824 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:05:57.020729    3824 cache.go:56] Caching tarball of preloaded images
	I0818 12:05:57.020850    3824 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0818 12:05:57.020861    3824 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:05:57.020943    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:05:57.021483    3824 start.go:360] acquireMachinesLock for ha-373000-m03: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:05:57.021533    3824 start.go:364] duration metric: took 37.26µs to acquireMachinesLock for "ha-373000-m03"
	I0818 12:05:57.021546    3824 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:05:57.021559    3824 fix.go:54] fixHost starting: m03
	I0818 12:05:57.021778    3824 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:05:57.021797    3824 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:05:57.030756    3824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51825
	I0818 12:05:57.031090    3824 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:05:57.031467    3824 main.go:141] libmachine: Using API Version  1
	I0818 12:05:57.031484    3824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:05:57.031692    3824 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:05:57.031804    3824 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	I0818 12:05:57.031899    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetState
	I0818 12:05:57.031976    3824 main.go:141] libmachine: (ha-373000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:05:57.032050    3824 main.go:141] libmachine: (ha-373000-m03) DBG | hyperkit pid from json: 3309
	I0818 12:05:57.032942    3824 main.go:141] libmachine: (ha-373000-m03) DBG | hyperkit pid 3309 missing from process table
	I0818 12:05:57.032990    3824 fix.go:112] recreateIfNeeded on ha-373000-m03: state=Stopped err=<nil>
	I0818 12:05:57.033010    3824 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	W0818 12:05:57.033095    3824 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:05:57.053856    3824 out.go:177] * Restarting existing hyperkit VM for "ha-373000-m03" ...
	I0818 12:05:57.111714    3824 main.go:141] libmachine: (ha-373000-m03) Calling .Start
	I0818 12:05:57.112061    3824 main.go:141] libmachine: (ha-373000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/hyperkit.pid
	I0818 12:05:57.112084    3824 main.go:141] libmachine: (ha-373000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:05:57.113448    3824 main.go:141] libmachine: (ha-373000-m03) DBG | hyperkit pid 3309 missing from process table
	I0818 12:05:57.113464    3824 main.go:141] libmachine: (ha-373000-m03) DBG | pid 3309 is in state "Stopped"
	I0818 12:05:57.113496    3824 main.go:141] libmachine: (ha-373000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/hyperkit.pid...
	I0818 12:05:57.113651    3824 main.go:141] libmachine: (ha-373000-m03) DBG | Using UUID 94c31089-d24d-4aaf-9127-b4e2c0237480
	I0818 12:05:57.139957    3824 main.go:141] libmachine: (ha-373000-m03) DBG | Generated MAC 72:9e:9b:7f:e6:a8
	I0818 12:05:57.139982    3824 main.go:141] libmachine: (ha-373000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000
	I0818 12:05:57.140122    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"94c31089-d24d-4aaf-9127-b4e2c0237480", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b2660)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:05:57.140163    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"94c31089-d24d-4aaf-9127-b4e2c0237480", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b2660)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:05:57.140207    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "94c31089-d24d-4aaf-9127-b4e2c0237480", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/ha-373000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"}
	I0818 12:05:57.140253    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 94c31089-d24d-4aaf-9127-b4e2c0237480 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/ha-373000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"
	I0818 12:05:57.140273    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 12:05:57.141664    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 DEBUG: hyperkit: Pid is 3862
	I0818 12:05:57.142065    3824 main.go:141] libmachine: (ha-373000-m03) DBG | Attempt 0
	I0818 12:05:57.142080    3824 main.go:141] libmachine: (ha-373000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:05:57.142152    3824 main.go:141] libmachine: (ha-373000-m03) DBG | hyperkit pid from json: 3862
	I0818 12:05:57.143976    3824 main.go:141] libmachine: (ha-373000-m03) DBG | Searching for 72:9e:9b:7f:e6:a8 in /var/db/dhcpd_leases ...
	I0818 12:05:57.144038    3824 main.go:141] libmachine: (ha-373000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0818 12:05:57.144051    3824 main.go:141] libmachine: (ha-373000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c3975b}
	I0818 12:05:57.144071    3824 main.go:141] libmachine: (ha-373000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39749}
	I0818 12:05:57.144076    3824 main.go:141] libmachine: (ha-373000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c245a6}
	I0818 12:05:57.144085    3824 main.go:141] libmachine: (ha-373000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c39672}
	I0818 12:05:57.144096    3824 main.go:141] libmachine: (ha-373000-m03) DBG | Found match: 72:9e:9b:7f:e6:a8
	I0818 12:05:57.144104    3824 main.go:141] libmachine: (ha-373000-m03) DBG | IP: 192.169.0.7
	I0818 12:05:57.144124    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetConfigRaw
	I0818 12:05:57.144820    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetIP
	I0818 12:05:57.145002    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:05:57.145622    3824 machine.go:93] provisionDockerMachine start ...
	I0818 12:05:57.145633    3824 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	I0818 12:05:57.145753    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:05:57.145862    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:05:57.145984    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:05:57.146107    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:05:57.146206    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:05:57.146322    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:05:57.146485    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0818 12:05:57.146492    3824 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 12:05:57.149281    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 12:05:57.157498    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 12:05:57.158547    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:05:57.158570    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:05:57.158621    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:05:57.158637    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:05:57.538516    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 12:05:57.538532    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 12:05:57.653356    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:05:57.653382    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:05:57.653391    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:05:57.653407    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:05:57.654209    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 12:05:57.654219    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:05:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 12:06:03.320567    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:06:03 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0818 12:06:03.320633    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:06:03 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0818 12:06:03.320642    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:06:03 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0818 12:06:03.344230    3824 main.go:141] libmachine: (ha-373000-m03) DBG | 2024/08/18 12:06:03 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0818 12:06:32.211281    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 12:06:32.211301    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetMachineName
	I0818 12:06:32.211449    3824 buildroot.go:166] provisioning hostname "ha-373000-m03"
	I0818 12:06:32.211462    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetMachineName
	I0818 12:06:32.211557    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:32.211637    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:32.211710    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.211795    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.211870    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:32.212039    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:06:32.212206    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0818 12:06:32.212216    3824 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-373000-m03 && echo "ha-373000-m03" | sudo tee /etc/hostname
	I0818 12:06:32.283934    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-373000-m03
	
	I0818 12:06:32.283950    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:32.284081    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:32.284166    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.284244    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.284338    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:32.284470    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:06:32.284619    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0818 12:06:32.284630    3824 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-373000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-373000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-373000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 12:06:32.349979    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 12:06:32.349995    3824 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1007/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1007/.minikube}
	I0818 12:06:32.350007    3824 buildroot.go:174] setting up certificates
	I0818 12:06:32.350014    3824 provision.go:84] configureAuth start
	I0818 12:06:32.350021    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetMachineName
	I0818 12:06:32.350153    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetIP
	I0818 12:06:32.350260    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:32.350351    3824 provision.go:143] copyHostCerts
	I0818 12:06:32.350379    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:06:32.350451    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem, removing ...
	I0818 12:06:32.350457    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:06:32.350602    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem (1082 bytes)
	I0818 12:06:32.350813    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:06:32.350855    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem, removing ...
	I0818 12:06:32.350861    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:06:32.350938    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem (1123 bytes)
	I0818 12:06:32.351094    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:06:32.351132    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem, removing ...
	I0818 12:06:32.351137    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:06:32.351223    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem (1675 bytes)
	I0818 12:06:32.351372    3824 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem org=jenkins.ha-373000-m03 san=[127.0.0.1 192.169.0.7 ha-373000-m03 localhost minikube]
	I0818 12:06:32.510769    3824 provision.go:177] copyRemoteCerts
	I0818 12:06:32.510826    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 12:06:32.510842    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:32.510985    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:32.511073    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.511136    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:32.511201    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/id_rsa Username:docker}
	I0818 12:06:32.548268    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 12:06:32.548346    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 12:06:32.568706    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 12:06:32.568782    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0818 12:06:32.588790    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 12:06:32.588863    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 12:06:32.608953    3824 provision.go:87] duration metric: took 258.934195ms to configureAuth
	I0818 12:06:32.608976    3824 buildroot.go:189] setting minikube options for container-runtime
	I0818 12:06:32.609164    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:06:32.609181    3824 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	I0818 12:06:32.609317    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:32.609407    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:32.609488    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.609563    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.609655    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:32.609780    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:06:32.609954    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0818 12:06:32.609962    3824 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0818 12:06:32.671099    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0818 12:06:32.671110    3824 buildroot.go:70] root file system type: tmpfs
	I0818 12:06:32.671182    3824 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0818 12:06:32.671194    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:32.671327    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:32.671421    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.671505    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.671597    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:32.671725    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:06:32.671862    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0818 12:06:32.671916    3824 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0818 12:06:32.743226    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0818 12:06:32.743243    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:32.743369    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:32.743463    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.743553    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:32.743628    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:32.743742    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:06:32.743890    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0818 12:06:32.743902    3824 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0818 12:06:34.364405    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
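The SSH command at 12:06:32.743902 is an idempotent install: diff -u exits non-zero when the installed unit differs from the new one (or, as the output above shows, does not exist yet), and only then is the file moved into place and docker re-enabled and restarted. The same idiom in isolation, with file names taken from the log:

	# Install a unit file and restart its service only if content changed.
	new=/lib/systemd/system/docker.service.new
	cur=/lib/systemd/system/docker.service
	sudo diff -u "$cur" "$new" || {
	  sudo mv "$new" "$cur"          # install the changed or missing unit
	  sudo systemctl daemon-reload   # re-read unit definitions
	  sudo systemctl restart docker  # pick up the new ExecStart
	}
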
	I0818 12:06:34.364421    3824 machine.go:96] duration metric: took 37.219949388s to provisionDockerMachine
	I0818 12:06:34.364429    3824 start.go:293] postStartSetup for "ha-373000-m03" (driver="hyperkit")
	I0818 12:06:34.364441    3824 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 12:06:34.364454    3824 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	I0818 12:06:34.364637    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 12:06:34.364649    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:34.364748    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:34.364846    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:34.364924    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:34.364998    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/id_rsa Username:docker}
	I0818 12:06:34.403257    3824 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 12:06:34.406448    3824 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 12:06:34.406462    3824 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/addons for local assets ...
	I0818 12:06:34.406565    3824 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/files for local assets ...
	I0818 12:06:34.406753    3824 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> 15262.pem in /etc/ssl/certs
	I0818 12:06:34.406760    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /etc/ssl/certs/15262.pem
	I0818 12:06:34.406965    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 12:06:34.415199    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:06:34.434664    3824 start.go:296] duration metric: took 70.221347ms for postStartSetup
	I0818 12:06:34.434685    3824 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	I0818 12:06:34.434881    3824 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0818 12:06:34.434895    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:34.434985    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:34.435078    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:34.435180    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:34.435266    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/id_rsa Username:docker}
	I0818 12:06:34.472820    3824 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0818 12:06:34.472878    3824 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0818 12:06:34.507076    3824 fix.go:56] duration metric: took 37.486680553s for fixHost
	I0818 12:06:34.507105    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:34.507242    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:34.507350    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:34.507450    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:34.507537    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:34.507661    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:06:34.507812    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0818 12:06:34.507820    3824 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 12:06:34.567906    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724007994.725838648
	
	I0818 12:06:34.567925    3824 fix.go:216] guest clock: 1724007994.725838648
	I0818 12:06:34.567930    3824 fix.go:229] Guest: 2024-08-18 12:06:34.725838648 -0700 PDT Remote: 2024-08-18 12:06:34.507094 -0700 PDT m=+122.564244892 (delta=218.744648ms)
	I0818 12:06:34.567943    3824 fix.go:200] guest clock delta is within tolerance: 218.744648ms
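	fix.go derives the guest clock from the `date +%s.%N` output, subtracts the host clock at the moment the command returned, and accepts the machine when the skew is within tolerance (218.744648ms here). A rough shell equivalent, assuming GNU date on both ends (macOS date has no %N) and an illustrative 1s threshold:

	# Measure host/guest wall-clock skew over SSH.
	guest=$(ssh docker@192.169.0.7 'date +%s.%N')   # guest clock
	host=$(date +%s.%N)                             # host clock
	awk -v g="$guest" -v h="$host" 'BEGIN {
	  d = g - h; if (d < 0) d = -d
	  printf "delta=%.3fs\n", d
	  exit (d < 1.0) ? 0 : 1       # 0 = within the illustrative tolerance
	}'
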
	I0818 12:06:34.567946    3824 start.go:83] releasing machines lock for "ha-373000-m03", held for 37.547576549s
	I0818 12:06:34.567963    3824 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	I0818 12:06:34.568094    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetIP
	I0818 12:06:34.591371    3824 out.go:177] * Found network options:
	I0818 12:06:34.612327    3824 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0818 12:06:34.633268    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	W0818 12:06:34.633293    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 12:06:34.633308    3824 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	I0818 12:06:34.633777    3824 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	I0818 12:06:34.633931    3824 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	I0818 12:06:34.634012    3824 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 12:06:34.634042    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	W0818 12:06:34.634075    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	W0818 12:06:34.634099    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 12:06:34.634164    3824 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0818 12:06:34.634177    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:06:34.634183    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:34.634314    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:06:34.634342    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:34.634432    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:34.634462    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:06:34.634570    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/id_rsa Username:docker}
	I0818 12:06:34.634589    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:06:34.634716    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/id_rsa Username:docker}
	W0818 12:06:34.668553    3824 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 12:06:34.668615    3824 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 12:06:34.719514    3824 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 12:06:34.719537    3824 start.go:495] detecting cgroup driver to use...
	I0818 12:06:34.719641    3824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:06:34.736086    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0818 12:06:34.744327    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0818 12:06:34.752345    3824 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0818 12:06:34.752395    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0818 12:06:34.760474    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:06:34.768546    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0818 12:06:34.776560    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:06:34.784665    3824 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 12:06:34.792933    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0818 12:06:34.801000    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0818 12:06:34.809207    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
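	The sed series above edits /etc/containerd/config.toml in place: cgroupfs instead of the systemd cgroup driver, the runc v2 runtime, pause:3.10 as the sandbox image, and /etc/cni/net.d as the CNI conf dir. A quick check that the rewrites landed (keys copied from the sed expressions above):

	# Confirm the containerd settings the sed edits are meant to produce.
	grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	# Expected:
	#   SystemdCgroup = false
	#   sandbox_image = "registry.k8s.io/pause:3.10"
	#   conf_dir = "/etc/cni/net.d"
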
	I0818 12:06:34.817499    3824 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 12:06:34.824699    3824 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 12:06:34.832081    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:06:34.922497    3824 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0818 12:06:34.942245    3824 start.go:495] detecting cgroup driver to use...
	I0818 12:06:34.942318    3824 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0818 12:06:34.961594    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:06:34.977959    3824 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 12:06:34.994785    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:06:35.006539    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:06:35.017278    3824 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0818 12:06:35.039389    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:06:35.050815    3824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:06:35.065658    3824 ssh_runner.go:195] Run: which cri-dockerd
	I0818 12:06:35.068495    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0818 12:06:35.078248    3824 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0818 12:06:35.092006    3824 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0818 12:06:35.191577    3824 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0818 12:06:35.301568    3824 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0818 12:06:35.301599    3824 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0818 12:06:35.317876    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:06:35.413915    3824 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 12:06:37.731416    3824 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.317550809s)
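	The 130-byte daemon.json copied at 12:06:35.301599 is not shown in the log; given the `configuring docker to use "cgroupfs"` message just above it, a representative (not verbatim) payload and a verification step would be:

	# Illustrative only -- the actual 130-byte file is not in the log.
	sudo tee /etc/docker/daemon.json <<'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF
	sudo systemctl restart docker
	docker info --format '{{.CgroupDriver}}'   # should print: cgroupfs
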
	I0818 12:06:37.731481    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0818 12:06:37.741565    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:06:37.751381    3824 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0818 12:06:37.845484    3824 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0818 12:06:37.959362    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:06:38.068888    3824 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0818 12:06:38.082534    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:06:38.093177    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:06:38.188351    3824 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0818 12:06:38.252978    3824 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0818 12:06:38.253055    3824 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0818 12:06:38.257331    3824 start.go:563] Will wait 60s for crictl version
	I0818 12:06:38.257383    3824 ssh_runner.go:195] Run: which crictl
	I0818 12:06:38.260636    3824 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 12:06:38.285125    3824 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0818 12:06:38.285203    3824 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:06:38.303582    3824 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:06:38.341530    3824 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0818 12:06:38.415385    3824 out.go:177]   - env NO_PROXY=192.169.0.5
	I0818 12:06:38.457289    3824 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0818 12:06:38.478242    3824 main.go:141] libmachine: (ha-373000-m03) Calling .GetIP
	I0818 12:06:38.478613    3824 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0818 12:06:38.483129    3824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
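	The one-liner above keeps /etc/hosts idempotent: it filters out any stale host.minikube.internal line, appends the current mapping, and copies the result back through a temp file. Spelled out with the log's own values:

	# Idempotently pin a hostname in /etc/hosts.
	ip=192.169.0.1; name=host.minikube.internal
	{ grep -v $'\t'"$name"'$' /etc/hosts      # drop any previous entry
	  printf '%s\t%s\n' "$ip" "$name"         # append the current mapping
	} > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts
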
	I0818 12:06:38.492823    3824 mustload.go:65] Loading cluster: ha-373000
	I0818 12:06:38.493001    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:06:38.493248    3824 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:06:38.493270    3824 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:06:38.502531    3824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51847
	I0818 12:06:38.502982    3824 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:06:38.503380    3824 main.go:141] libmachine: Using API Version  1
	I0818 12:06:38.503398    3824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:06:38.503603    3824 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:06:38.503720    3824 main.go:141] libmachine: (ha-373000) Calling .GetState
	I0818 12:06:38.503806    3824 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:06:38.503908    3824 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid from json: 3836
	I0818 12:06:38.504863    3824 host.go:66] Checking if "ha-373000" exists ...
	I0818 12:06:38.505136    3824 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:06:38.505159    3824 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:06:38.514076    3824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51849
	I0818 12:06:38.514417    3824 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:06:38.514734    3824 main.go:141] libmachine: Using API Version  1
	I0818 12:06:38.514748    3824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:06:38.514977    3824 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:06:38.515088    3824 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:06:38.515180    3824 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000 for IP: 192.169.0.7
	I0818 12:06:38.515186    3824 certs.go:194] generating shared ca certs ...
	I0818 12:06:38.515198    3824 certs.go:226] acquiring lock for ca certs: {Name:mkf9c4ce6e92bb713042213e4605fc93500c1ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:06:38.515378    3824 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key
	I0818 12:06:38.515454    3824 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key
	I0818 12:06:38.515480    3824 certs.go:256] generating profile certs ...
	I0818 12:06:38.515601    3824 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.key
	I0818 12:06:38.515691    3824 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.a796c580
	I0818 12:06:38.515764    3824 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key
	I0818 12:06:38.515772    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0818 12:06:38.515792    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0818 12:06:38.515811    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0818 12:06:38.515836    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0818 12:06:38.515854    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0818 12:06:38.515881    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0818 12:06:38.515909    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0818 12:06:38.515932    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0818 12:06:38.516021    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem (1338 bytes)
	W0818 12:06:38.516070    3824 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526_empty.pem, impossibly tiny 0 bytes
	I0818 12:06:38.516079    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem (1675 bytes)
	I0818 12:06:38.516113    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem (1082 bytes)
	I0818 12:06:38.516146    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem (1123 bytes)
	I0818 12:06:38.516176    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem (1675 bytes)
	I0818 12:06:38.516242    3824 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:06:38.516275    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /usr/share/ca-certificates/15262.pem
	I0818 12:06:38.516297    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:06:38.516315    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem -> /usr/share/ca-certificates/1526.pem
	I0818 12:06:38.516339    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:06:38.516428    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:06:38.516506    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:06:38.516591    3824 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:06:38.516676    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:06:38.545460    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0818 12:06:38.549008    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0818 12:06:38.556894    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0818 12:06:38.559945    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0818 12:06:38.573932    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0818 12:06:38.577300    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0818 12:06:38.585295    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0818 12:06:38.588495    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0818 12:06:38.596413    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0818 12:06:38.600019    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0818 12:06:38.608205    3824 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0818 12:06:38.612275    3824 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0818 12:06:38.620061    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 12:06:38.640273    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0818 12:06:38.660114    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 12:06:38.679901    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0818 12:06:38.699819    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0818 12:06:38.718980    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 12:06:38.739258    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 12:06:38.759233    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 12:06:38.779159    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /usr/share/ca-certificates/15262.pem (1708 bytes)
	I0818 12:06:38.799128    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 12:06:38.819459    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem --> /usr/share/ca-certificates/1526.pem (1338 bytes)
	I0818 12:06:38.839485    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0818 12:06:38.853931    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0818 12:06:38.867660    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0818 12:06:38.881016    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0818 12:06:38.894734    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0818 12:06:38.908655    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0818 12:06:38.922215    3824 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0818 12:06:38.936152    3824 ssh_runner.go:195] Run: openssl version
	I0818 12:06:38.940292    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15262.pem && ln -fs /usr/share/ca-certificates/15262.pem /etc/ssl/certs/15262.pem"
	I0818 12:06:38.948670    3824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15262.pem
	I0818 12:06:38.951984    3824 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:54 /usr/share/ca-certificates/15262.pem
	I0818 12:06:38.952025    3824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15262.pem
	I0818 12:06:38.956301    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15262.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 12:06:38.964945    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 12:06:38.973410    3824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:06:38.976837    3824 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:38 /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:06:38.976884    3824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:06:38.980998    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 12:06:38.989539    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1526.pem && ln -fs /usr/share/ca-certificates/1526.pem /etc/ssl/certs/1526.pem"
	I0818 12:06:38.998105    3824 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1526.pem
	I0818 12:06:39.001464    3824 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:54 /usr/share/ca-certificates/1526.pem
	I0818 12:06:39.001509    3824 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1526.pem
	I0818 12:06:39.005796    3824 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1526.pem /etc/ssl/certs/51391683.0"
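	The 3ec20f2e.0, b5213941.0, and 51391683.0 names above are OpenSSL subject hashes: OpenSSL resolves a CA in /etc/ssl/certs by looking for `<subject-hash>.0`, so each certificate gets a hash-named symlink. Recreating one link by hand (cert path from the log):

	# Link a CA under its OpenSSL subject hash so verification can find it.
	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")   # e.g. b5213941
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
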
	I0818 12:06:39.014114    3824 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 12:06:39.017475    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 12:06:39.021708    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 12:06:39.025941    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 12:06:39.030326    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 12:06:39.034611    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 12:06:39.038815    3824 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
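	Each `-checkend 86400` call above makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 h), so imminent expiry is detected without parsing dates. For example:

	# Exit 1 (and print a warning) if the cert expires within 24 hours.
	if openssl x509 -noout -checkend 86400 \
	     -in /var/lib/minikube/certs/etcd/server.crt; then
	  echo "cert valid for at least another day"
	else
	  echo "cert expires within 24h"
	fi
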
	I0818 12:06:39.043094    3824 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.31.0 docker true true} ...
	I0818 12:06:39.043154    3824 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-373000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 12:06:39.043171    3824 kube-vip.go:115] generating kube-vip config ...
	I0818 12:06:39.043216    3824 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0818 12:06:39.056006    3824 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0818 12:06:39.056050    3824 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
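	With cp_enable and vip_leaderelection set, the kube-vip pods elect a single holder of the 192.169.0.254 VIP through the plndr-cp-lock lock named above (5s lease, 3s renew deadline, 1s retry). Assuming this kube-vip build uses a Lease object for the lock, the current holder can be inspected with:

	# Show which control-plane node currently holds the kube-vip lock.
	kubectl -n kube-system get lease plndr-cp-lock \
	  -o jsonpath='{.spec.holderIdentity}{"\n"}'
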
	I0818 12:06:39.056106    3824 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 12:06:39.064688    3824 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 12:06:39.064746    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0818 12:06:39.073725    3824 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0818 12:06:39.087281    3824 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 12:06:39.101247    3824 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0818 12:06:39.115342    3824 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0818 12:06:39.118445    3824 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 12:06:39.127826    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:06:39.220452    3824 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 12:06:39.236932    3824 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:06:39.237124    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:06:39.258433    3824 out.go:177] * Verifying Kubernetes components...
	I0818 12:06:39.298999    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:06:39.406166    3824 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 12:06:39.422783    3824 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:06:39.423042    3824 kapi.go:59] client config for ha-373000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xcb43f60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0818 12:06:39.423091    3824 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0818 12:06:39.423285    3824 node_ready.go:35] waiting up to 6m0s for node "ha-373000-m03" to be "Ready" ...
	I0818 12:06:39.423367    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:39.423379    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.423392    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.423403    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.425980    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:39.924516    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:39.924530    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.924537    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.924541    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.927146    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:39.927756    3824 node_ready.go:49] node "ha-373000-m03" has status "Ready":"True"
	I0818 12:06:39.927766    3824 node_ready.go:38] duration metric: took 504.486873ms for node "ha-373000-m03" to be "Ready" ...
	I0818 12:06:39.927772    3824 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
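	pod_ready.go polls the kube-system pods matching each label above until all report Ready. The equivalent one-off check from a shell, using the same label set and the test's 6m budget:

	# Wait for every system-critical component the test watches.
	for l in k8s-app=kube-dns component=etcd component=kube-apiserver \
	         component=kube-controller-manager k8s-app=kube-proxy \
	         component=kube-scheduler; do
	  kubectl -n kube-system wait --for=condition=Ready pod -l "$l" --timeout=6m
	done
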
	I0818 12:06:39.927816    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0818 12:06:39.927826    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.927832    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.927835    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.932950    3824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 12:06:39.939217    3824 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hv98f" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:39.939280    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-hv98f
	I0818 12:06:39.939289    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.939296    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.939299    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.942170    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:39.942704    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:06:39.942712    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.942718    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.942722    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.945194    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:39.945502    3824 pod_ready.go:93] pod "coredns-6f6b679f8f-hv98f" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:39.945513    3824 pod_ready.go:82] duration metric: took 6.280436ms for pod "coredns-6f6b679f8f-hv98f" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:39.945527    3824 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-rcfmc" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:39.945573    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-rcfmc
	I0818 12:06:39.945579    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.945596    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.945604    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.947744    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:39.948231    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:06:39.948239    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.948244    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.948249    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.949935    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:39.950306    3824 pod_ready.go:93] pod "coredns-6f6b679f8f-rcfmc" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:39.950316    3824 pod_ready.go:82] duration metric: took 4.783283ms for pod "coredns-6f6b679f8f-rcfmc" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:39.950324    3824 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:39.950360    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000
	I0818 12:06:39.950366    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.950371    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.950376    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.952196    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:39.952623    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:06:39.952632    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.952637    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.952640    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.954395    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:39.954700    3824 pod_ready.go:93] pod "etcd-ha-373000" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:39.954713    3824 pod_ready.go:82] duration metric: took 4.380752ms for pod "etcd-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:39.954728    3824 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:39.954770    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m02
	I0818 12:06:39.954775    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.954781    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.954784    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.956816    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:39.957264    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:06:39.957272    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:39.957278    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:39.957281    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:39.958954    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:39.959393    3824 pod_ready.go:93] pod "etcd-ha-373000-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:39.959403    3824 pod_ready.go:82] duration metric: took 4.669444ms for pod "etcd-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:39.959410    3824 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:40.124592    3824 request.go:632] Waited for 165.145751ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:40.124629    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:40.124633    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:40.124639    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:40.124645    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:40.127273    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:40.325487    3824 request.go:632] Waited for 197.85948ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:40.325561    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:40.325576    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:40.325592    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:40.325603    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:40.328610    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:40.524678    3824 request.go:632] Waited for 64.314725ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:40.524779    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:40.524787    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:40.524794    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:40.524800    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:40.534379    3824 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0818 12:06:40.724687    3824 request.go:632] Waited for 189.641273ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:40.724767    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:40.724780    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:40.724790    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:40.724795    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:40.727857    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:40.960310    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:40.960323    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:40.960330    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:40.960334    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:40.962980    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:41.125004    3824 request.go:632] Waited for 161.489984ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:41.125051    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:41.125059    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:41.125068    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:41.125074    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:41.127660    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:41.459552    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:41.459565    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:41.459572    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:41.459576    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:41.462348    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:41.524806    3824 request.go:632] Waited for 61.84167ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:41.524878    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:41.524889    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:41.524897    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:41.524902    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:41.527287    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
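	The repeated `Waited for ... due to client-side throttling, not priority and fairness` lines come from client-go's own rate limiter, not from the API server: the rest.Config dumped at 12:06:39.423042 shows QPS:0, Burst:0, so client-go falls back to its defaults (5 requests/s, burst 10) and delays bursts of node/pod GETs locally. kubectl emits the same message at higher verbosity (flag value illustrative):

	# Surface client-side throttling messages from kubectl's client.
	kubectl get pods -A -v=6 2>&1 | grep 'client-side throttling' || true
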
	I0818 12:06:41.959574    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:41.959588    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:41.959594    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:41.959599    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:41.962051    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:41.962553    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:41.962563    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:41.962570    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:41.962588    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:41.964779    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:41.965088    3824 pod_ready.go:103] pod "etcd-ha-373000-m03" in "kube-system" namespace has status "Ready":"False"
	I0818 12:06:42.461485    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:42.461498    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:42.461504    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:42.461507    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:42.463825    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:42.464339    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:42.464350    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:42.464358    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:42.464363    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:42.466190    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:42.960283    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:42.960301    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:42.960308    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:42.960313    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:42.962745    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:42.963399    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:42.963408    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:42.963415    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:42.963420    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:42.965667    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:43.460941    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:43.460961    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:43.460973    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:43.460980    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:43.464358    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:43.464865    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:43.464876    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:43.464885    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:43.464903    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:43.466644    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:43.960616    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:43.960635    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:43.960662    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:43.960670    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:43.963241    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:43.963592    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:43.963599    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:43.963605    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:43.963609    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:43.965295    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:43.965679    3824 pod_ready.go:103] pod "etcd-ha-373000-m03" in "kube-system" namespace has status "Ready":"False"
	I0818 12:06:44.459655    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:44.459670    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:44.459678    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:44.459684    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:44.462938    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:44.463437    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:44.463446    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:44.463453    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:44.463456    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:44.465455    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:44.960738    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:44.960764    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:44.960775    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:44.960781    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:44.964513    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:44.965181    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:44.965189    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:44.965195    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:44.965198    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:44.967125    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:45.459544    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:45.459557    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:45.459564    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:45.459567    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:45.461789    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:45.462287    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:45.462295    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:45.462301    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:45.462304    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:45.463842    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:45.959866    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:45.959882    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:45.959891    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:45.959895    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:45.962334    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:45.962673    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:45.962680    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:45.962686    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:45.962691    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:45.964328    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:46.460263    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:46.460278    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:46.460302    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:46.460307    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:46.462738    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:46.463273    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:46.463281    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:46.463287    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:46.463290    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:46.465376    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:46.465623    3824 pod_ready.go:103] pod "etcd-ha-373000-m03" in "kube-system" namespace has status "Ready":"False"
	I0818 12:06:46.960651    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:46.960728    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:46.960746    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:46.960756    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:46.963413    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:46.963863    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:46.963871    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:46.963877    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:46.963879    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:46.965522    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:47.460546    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:47.460559    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:47.460565    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:47.460569    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:47.462347    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:47.462831    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:47.462839    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:47.462845    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:47.462849    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:47.465797    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:47.959568    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:47.959595    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:47.959606    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:47.959613    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:47.962968    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:47.963654    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:47.963665    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:47.963673    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:47.963678    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:47.965348    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:48.460843    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:48.460865    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:48.460878    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:48.460888    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:48.464226    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:48.464806    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:48.464814    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:48.464820    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:48.464824    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:48.466523    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:48.466821    3824 pod_ready.go:103] pod "etcd-ha-373000-m03" in "kube-system" namespace has status "Ready":"False"
	I0818 12:06:48.960506    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:48.960532    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:48.960544    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:48.960549    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:48.964130    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:48.964586    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:48.964596    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:48.964604    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:48.964610    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:48.966425    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:49.459390    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:49.459415    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:49.459427    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:49.459433    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:49.463245    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:49.463769    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:49.463781    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:49.463788    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:49.463792    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:49.466543    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:49.959537    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:49.959561    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:49.959571    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:49.959577    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:49.962607    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:49.963064    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:49.963072    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:49.963077    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:49.963081    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:49.964839    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:50.460746    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:50.460763    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:50.460770    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:50.460773    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:50.463380    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:50.463793    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:50.463801    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:50.463807    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:50.463810    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:50.466499    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:50.466793    3824 pod_ready.go:103] pod "etcd-ha-373000-m03" in "kube-system" namespace has status "Ready":"False"
	I0818 12:06:50.960528    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:50.960552    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:50.960563    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:50.960569    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:50.964095    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:50.964754    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:50.964765    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:50.964773    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:50.964779    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:50.966674    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:51.459276    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:51.459296    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:51.459307    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:51.459323    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:51.462737    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:51.463318    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:51.463325    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:51.463331    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:51.463342    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:51.465140    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:51.960158    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:51.960178    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:51.960190    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:51.960196    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:51.963615    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:51.964184    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:51.964194    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:51.964201    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:51.964208    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:51.966317    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:52.459260    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:52.459275    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:52.459284    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:52.459299    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:52.461808    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:52.462199    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:52.462207    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:52.462214    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:52.462217    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:52.464015    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:52.959295    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:52.959313    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:52.959324    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:52.959330    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:52.963923    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:06:52.964435    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:52.964443    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:52.964449    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:52.964452    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:52.967830    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:52.968298    3824 pod_ready.go:103] pod "etcd-ha-373000-m03" in "kube-system" namespace has status "Ready":"False"
	I0818 12:06:53.459316    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:53.459335    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:53.459343    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:53.459349    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:53.464675    3824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 12:06:53.465233    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:53.465241    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:53.465248    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:53.465251    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:53.470328    3824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 12:06:53.960317    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:53.960343    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:53.960354    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:53.960360    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:53.964420    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:06:53.965229    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:53.965236    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:53.965242    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:53.965246    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:53.967660    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:54.459303    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:54.459315    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:54.459321    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:54.459324    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:54.461902    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:54.462298    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:54.462305    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:54.462310    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:54.462313    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:54.464747    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:54.960293    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:54.960319    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:54.960331    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:54.960339    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:54.963847    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:54.964473    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:54.964483    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:54.964491    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:54.964497    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:54.966299    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:55.459778    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:55.459804    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:55.459816    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:55.459824    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:55.463395    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:55.464072    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:55.464083    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:55.464091    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:55.464095    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:55.465859    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:55.466228    3824 pod_ready.go:103] pod "etcd-ha-373000-m03" in "kube-system" namespace has status "Ready":"False"
	I0818 12:06:55.959274    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:55.959295    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:55.959306    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:55.959313    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:55.962842    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:55.963214    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:55.963221    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:55.963227    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:55.963230    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:55.964851    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:56.459680    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:56.459702    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:56.459713    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:56.459719    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:56.463508    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:56.463978    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:56.463986    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:56.463993    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:56.463996    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:56.465851    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:56.959108    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:56.959168    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:56.959180    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:56.959188    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:56.962593    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:56.963101    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:56.963111    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:56.963119    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:56.963124    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:56.964734    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:57.458993    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:57.459009    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:57.459033    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:57.459044    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:57.461199    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:57.461630    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:57.461638    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:57.461644    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:57.461647    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:57.464799    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:57.959429    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:57.959455    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:57.959466    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:57.959471    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:57.962366    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:57.962731    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:57.962739    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:57.962745    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:57.962748    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:57.964355    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:57.964866    3824 pod_ready.go:103] pod "etcd-ha-373000-m03" in "kube-system" namespace has status "Ready":"False"
	I0818 12:06:58.459677    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:58.459697    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.459709    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.459714    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.463092    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:58.463794    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:58.463802    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.463809    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.463811    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.465563    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:58.959591    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-373000-m03
	I0818 12:06:58.959612    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.959623    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.959631    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.963002    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:58.964342    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:58.964361    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.964371    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.964377    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.966371    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:58.966690    3824 pod_ready.go:93] pod "etcd-ha-373000-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:58.966699    3824 pod_ready.go:82] duration metric: took 19.007875373s for pod "etcd-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
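
The 19s wait that ends above is the standard poll-until-Ready pattern the pod_ready.go lines record: one GET on the pod and one GET on its node roughly every 500ms until the pod's Ready condition flips to True. A minimal client-go sketch of that loop follows; it is not minikube's own pod_ready.go, and the kubeconfig path and the waitPodReady helper name are illustrative only.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True, mirroring the
// GET-per-interval loop in the log above (hypothetical helper, not minikube's).
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // illustrative choice: treat transient errors as "not ready yet"
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-ha-373000-m03", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod Ready")
}
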
	I0818 12:06:58.966710    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:58.966744    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000
	I0818 12:06:58.966749    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.966754    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.966759    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.968551    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:58.969049    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:06:58.969056    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.969062    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.969065    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.970647    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:58.971055    3824 pod_ready.go:93] pod "kube-apiserver-ha-373000" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:58.971063    3824 pod_ready.go:82] duration metric: took 4.347127ms for pod "kube-apiserver-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:58.971069    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:58.971100    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m02
	I0818 12:06:58.971105    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.971110    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.971116    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.972830    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:58.973265    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:06:58.973273    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.973279    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.973282    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.974809    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:58.975155    3824 pod_ready.go:93] pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:58.975165    3824 pod_ready.go:82] duration metric: took 4.091205ms for pod "kube-apiserver-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:58.975172    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:58.975209    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-373000-m03
	I0818 12:06:58.975214    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.975219    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.975223    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.976734    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:58.977185    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:58.977194    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.977199    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.977203    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.978595    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:58.978942    3824 pod_ready.go:93] pod "kube-apiserver-ha-373000-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:58.978951    3824 pod_ready.go:82] duration metric: took 3.77353ms for pod "kube-apiserver-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:58.978957    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:58.978988    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000
	I0818 12:06:58.978993    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.978999    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.979003    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.980398    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:58.980845    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:06:58.980852    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:58.980858    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:58.980861    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:58.982260    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:06:58.982600    3824 pod_ready.go:93] pod "kube-controller-manager-ha-373000" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:58.982608    3824 pod_ready.go:82] duration metric: took 3.645796ms for pod "kube-controller-manager-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:58.982614    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:59.160214    3824 request.go:632] Waited for 177.557781ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000-m02
	I0818 12:06:59.160303    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000-m02
	I0818 12:06:59.160314    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:59.160334    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:59.160341    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:59.163272    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:59.360510    3824 request.go:632] Waited for 196.433912ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:06:59.360620    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:06:59.360630    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:59.360640    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:59.360649    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:59.364048    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:59.364505    3824 pod_ready.go:93] pod "kube-controller-manager-ha-373000-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:59.364516    3824 pod_ready.go:82] duration metric: took 381.90816ms for pod "kube-controller-manager-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
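
The "Waited for ... due to client-side throttling, not priority and fairness" lines starting above come from client-go's default token-bucket rate limiter (QPS=5, Burst=10), which delays requests on the client before they are sent; it is unrelated to the server's priority-and-fairness machinery. A sketch of raising those limits on a rest.Config; the numbers and kubeconfig path are illustrative, not minikube's settings.

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	// client-go defaults to QPS=5, Burst=10; bursts of GETs beyond the bucket
	// are delayed client-side, which is what the "Waited for ..." lines record.
	cfg.QPS = 50
	cfg.Burst = 100
	_ = kubernetes.NewForConfigOrDie(cfg)
}
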
	I0818 12:06:59.364525    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:59.559640    3824 request.go:632] Waited for 195.079426ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000-m03
	I0818 12:06:59.559699    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-373000-m03
	I0818 12:06:59.559705    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:59.559711    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:59.559715    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:59.561728    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:06:59.760676    3824 request.go:632] Waited for 198.422535ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:59.760731    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:06:59.760742    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:59.760754    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:59.760761    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:59.764272    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:06:59.764909    3824 pod_ready.go:93] pod "kube-controller-manager-ha-373000-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 12:06:59.764919    3824 pod_ready.go:82] duration metric: took 400.401698ms for pod "kube-controller-manager-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:59.764926    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2xkhp" in "kube-system" namespace to be "Ready" ...
	I0818 12:06:59.960270    3824 request.go:632] Waited for 195.290695ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2xkhp
	I0818 12:06:59.960398    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2xkhp
	I0818 12:06:59.960409    3824 round_trippers.go:469] Request Headers:
	I0818 12:06:59.960422    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:06:59.960432    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:06:59.963585    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:00.161284    3824 request.go:632] Waited for 197.152508ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:07:00.161348    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:07:00.161357    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:00.161364    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:00.161368    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:00.163499    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:07:00.163968    3824 pod_ready.go:93] pod "kube-proxy-2xkhp" in "kube-system" namespace has status "Ready":"True"
	I0818 12:07:00.163978    3824 pod_ready.go:82] duration metric: took 399.059814ms for pod "kube-proxy-2xkhp" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:00.163984    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5hg88" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:00.360550    3824 request.go:632] Waited for 196.524224ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5hg88
	I0818 12:07:00.360645    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5hg88
	I0818 12:07:00.360674    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:00.360705    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:00.360715    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:00.364230    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:00.559710    3824 request.go:632] Waited for 194.892476ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:07:00.559754    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:07:00.559760    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:00.559767    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:00.559770    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:00.561706    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:07:00.562031    3824 pod_ready.go:93] pod "kube-proxy-5hg88" in "kube-system" namespace has status "Ready":"True"
	I0818 12:07:00.562041    3824 pod_ready.go:82] duration metric: took 398.063984ms for pod "kube-proxy-5hg88" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:00.562048    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bprqp" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:00.760849    3824 request.go:632] Waited for 198.76912ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bprqp
	I0818 12:07:00.760881    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bprqp
	I0818 12:07:00.760887    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:00.760893    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:00.760897    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:00.763176    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:07:00.959686    3824 request.go:632] Waited for 195.875972ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:07:00.959818    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:07:00.959837    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:00.959848    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:00.959855    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:00.963072    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:00.963632    3824 pod_ready.go:93] pod "kube-proxy-bprqp" in "kube-system" namespace has status "Ready":"True"
	I0818 12:07:00.963645    3824 pod_ready.go:82] duration metric: took 401.603061ms for pod "kube-proxy-bprqp" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:00.963654    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-l7zlx" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:01.160451    3824 request.go:632] Waited for 196.719541ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l7zlx
	I0818 12:07:01.160506    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l7zlx
	I0818 12:07:01.160515    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:01.160526    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:01.160534    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:01.163885    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:01.360939    3824 request.go:632] Waited for 196.415223ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m04
	I0818 12:07:01.361054    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m04
	I0818 12:07:01.361063    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:01.361074    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:01.361081    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:01.364720    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:01.365356    3824 pod_ready.go:98] node "ha-373000-m04" hosting pod "kube-proxy-l7zlx" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-373000-m04" has status "Ready":"Unknown"
	I0818 12:07:01.365374    3824 pod_ready.go:82] duration metric: took 401.724878ms for pod "kube-proxy-l7zlx" in "kube-system" namespace to be "Ready" ...
	E0818 12:07:01.365383    3824 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-373000-m04" hosting pod "kube-proxy-l7zlx" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-373000-m04" has status "Ready":"Unknown"
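
The skip above shows that the readiness check also gates on the hosting node: ha-373000-m04 reports Ready:"Unknown", which usually means its kubelet has stopped posting status, so the pod is treated as not Ready regardless of the pod object. A sketch of reading that node condition with client-go (clientset construction as in the earlier sketch; path illustrative):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-373000-m04", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			// Status is True, False, or Unknown; Unknown typically means the
			// kubelet has stopped reporting, as in the log line above.
			fmt.Printf("node %s Ready=%s\n", node.Name, c.Status)
		}
	}
}
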
	I0818 12:07:01.365389    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:01.560679    3824 request.go:632] Waited for 195.242196ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000
	I0818 12:07:01.560723    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000
	I0818 12:07:01.560732    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:01.560740    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:01.560745    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:01.562645    3824 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 12:07:01.761089    3824 request.go:632] Waited for 198.042947ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:07:01.761190    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000
	I0818 12:07:01.761200    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:01.761212    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:01.761218    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:01.764398    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:01.764800    3824 pod_ready.go:93] pod "kube-scheduler-ha-373000" in "kube-system" namespace has status "Ready":"True"
	I0818 12:07:01.764826    3824 pod_ready.go:82] duration metric: took 399.443504ms for pod "kube-scheduler-ha-373000" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:01.764834    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:01.959600    3824 request.go:632] Waited for 194.717673ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000-m02
	I0818 12:07:01.959651    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000-m02
	I0818 12:07:01.959662    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:01.959672    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:01.959678    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:01.963127    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:02.159886    3824 request.go:632] Waited for 196.172195ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:07:02.159958    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m02
	I0818 12:07:02.159975    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:02.159988    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:02.159997    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:02.163322    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:02.163764    3824 pod_ready.go:93] pod "kube-scheduler-ha-373000-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 12:07:02.163775    3824 pod_ready.go:82] duration metric: took 398.944902ms for pod "kube-scheduler-ha-373000-m02" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:02.163781    3824 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:02.359608    3824 request.go:632] Waited for 195.759022ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000-m03
	I0818 12:07:02.359664    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-373000-m03
	I0818 12:07:02.359677    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:02.359715    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:02.359722    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:02.363386    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:02.560395    3824 request.go:632] Waited for 196.314469ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:07:02.560474    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-373000-m03
	I0818 12:07:02.560483    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:02.560491    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:02.560495    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:02.563041    3824 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 12:07:02.563443    3824 pod_ready.go:93] pod "kube-scheduler-ha-373000-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 12:07:02.563453    3824 pod_ready.go:82] duration metric: took 399.678634ms for pod "kube-scheduler-ha-373000-m03" in "kube-system" namespace to be "Ready" ...
	I0818 12:07:02.563460    3824 pod_ready.go:39] duration metric: took 22.636385926s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 12:07:02.563470    3824 api_server.go:52] waiting for apiserver process to appear ...
	I0818 12:07:02.563523    3824 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 12:07:02.576904    3824 api_server.go:72] duration metric: took 23.340671308s to wait for apiserver process to appear ...
	I0818 12:07:02.576917    3824 api_server.go:88] waiting for apiserver healthz status ...
	I0818 12:07:02.576928    3824 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0818 12:07:02.581021    3824 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0818 12:07:02.581063    3824 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0818 12:07:02.581069    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:02.581075    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:02.581080    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:02.581650    3824 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0818 12:07:02.581745    3824 api_server.go:141] control plane version: v1.31.0
	I0818 12:07:02.581754    3824 api_server.go:131] duration metric: took 4.833461ms to wait for apiserver health ...
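
The health gate just logged is two cheap probes: GET /healthz, whose body is the literal "ok" seen above, and GET /version to read the control-plane version (v1.31.0 here). With a clientset both can go through the discovery REST client; a minimal sketch, again with an illustrative kubeconfig path:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// /healthz returns the literal body "ok" when the apiserver is healthy.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz=%s version=%s\n", body, v.GitVersion) // e.g. healthz=ok version=v1.31.0
}
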
	I0818 12:07:02.581759    3824 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 12:07:02.760273    3824 request.go:632] Waited for 178.46854ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0818 12:07:02.760344    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0818 12:07:02.760352    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:02.760358    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:02.760361    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:02.765147    3824 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 12:07:02.770514    3824 system_pods.go:59] 26 kube-system pods found
	I0818 12:07:02.770527    3824 system_pods.go:61] "coredns-6f6b679f8f-hv98f" [3ebd3bbf-bcb9-4afc-80f3-eaeb314da8eb] Running
	I0818 12:07:02.770531    3824 system_pods.go:61] "coredns-6f6b679f8f-rcfmc" [904f6af6-c2de-4bc2-bc1d-f87e209ef4ed] Running
	I0818 12:07:02.770534    3824 system_pods.go:61] "etcd-ha-373000" [29cf5b88-f2b5-40d4-9f40-5aedcc75c4d1] Running
	I0818 12:07:02.770537    3824 system_pods.go:61] "etcd-ha-373000-m02" [6b2f793b-6d8c-4580-b555-2be21fb70246] Running
	I0818 12:07:02.770539    3824 system_pods.go:61] "etcd-ha-373000-m03" [377788a8-77e7-4e5e-a488-9791f8e26b32] Running
	I0818 12:07:02.770545    3824 system_pods.go:61] "kindnet-2gf5h" [ff15d17a-fb96-4721-847f-13f5c0e2613a] Running
	I0818 12:07:02.770549    3824 system_pods.go:61] "kindnet-k4c4p" [5219ec54-5c0a-471a-a99f-2823b4f944d9] Running
	I0818 12:07:02.770552    3824 system_pods.go:61] "kindnet-q7ghp" [933c4eb9-1d38-4575-a873-183f2fd31b25] Running
	I0818 12:07:02.770556    3824 system_pods.go:61] "kindnet-wxcx9" [11d95b76-db04-43de-9d3d-ce2147fbed21] Running
	I0818 12:07:02.770558    3824 system_pods.go:61] "kube-apiserver-ha-373000" [cc4f174d-0e3c-4aa8-b039-0bf57d6b0228] Running
	I0818 12:07:02.770561    3824 system_pods.go:61] "kube-apiserver-ha-373000-m02" [b72f02e8-46f2-46d4-8e73-03281cc424fc] Running
	I0818 12:07:02.770564    3824 system_pods.go:61] "kube-apiserver-ha-373000-m03" [5d3892b8-07e1-40a0-b4dc-4d9d9e8bc254] Running
	I0818 12:07:02.770566    3824 system_pods.go:61] "kube-controller-manager-ha-373000" [78b55cfd-8f33-4071-99b0-900fcb256ed1] Running
	I0818 12:07:02.770570    3824 system_pods.go:61] "kube-controller-manager-ha-373000-m02" [e261db19-d00f-41d1-aaf2-c1b77067c217] Running
	I0818 12:07:02.770573    3824 system_pods.go:61] "kube-controller-manager-ha-373000-m03" [fa71d7d9-904d-4df6-afe9-768758e32854] Running
	I0818 12:07:02.770577    3824 system_pods.go:61] "kube-proxy-2xkhp" [6a1cb8ad-e118-4db0-8852-b2d4973f536a] Running
	I0818 12:07:02.770580    3824 system_pods.go:61] "kube-proxy-5hg88" [4d00bf3e-8817-4220-91c5-7085fa453fa6] Running
	I0818 12:07:02.770583    3824 system_pods.go:61] "kube-proxy-bprqp" [4cf4fd8d-01fb-4e36-bede-6a44ab84b7bf] Running
	I0818 12:07:02.770585    3824 system_pods.go:61] "kube-proxy-l7zlx" [853afdf8-598a-435c-8c48-233287580493] Running
	I0818 12:07:02.770588    3824 system_pods.go:61] "kube-scheduler-ha-373000" [b1284a96-b76a-4909-9301-bfff7bbef8d6] Running
	I0818 12:07:02.770590    3824 system_pods.go:61] "kube-scheduler-ha-373000-m02" [fdfdc014-0b22-48aa-a1c6-5885bc67d472] Running
	I0818 12:07:02.770593    3824 system_pods.go:61] "kube-scheduler-ha-373000-m03" [89e378c4-274f-4aa1-8683-127061a7033e] Running
	I0818 12:07:02.770596    3824 system_pods.go:61] "kube-vip-ha-373000" [8df6a228-9693-4a21-af87-1bc9fc2af995] Running
	I0818 12:07:02.770598    3824 system_pods.go:61] "kube-vip-ha-373000-m02" [9cf001da-2ec7-4c04-bcb8-baa2bd4626f4] Running
	I0818 12:07:02.770601    3824 system_pods.go:61] "kube-vip-ha-373000-m03" [2e8a44eb-d965-4a90-ae08-464c7064ae17] Running
	I0818 12:07:02.770603    3824 system_pods.go:61] "storage-provisioner" [aa9c6f5d-6c1e-4901-83bb-62bc420ea044] Running
	I0818 12:07:02.770607    3824 system_pods.go:74] duration metric: took 188.849851ms to wait for pod list to return data ...
	I0818 12:07:02.770613    3824 default_sa.go:34] waiting for default service account to be created ...
	I0818 12:07:02.959522    3824 request.go:632] Waited for 188.86655ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0818 12:07:02.959578    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0818 12:07:02.959587    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:02.959598    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:02.959608    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:02.963054    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:02.963263    3824 default_sa.go:45] found service account: "default"
	I0818 12:07:02.963277    3824 default_sa.go:55] duration metric: took 192.665025ms for default service account to be created ...
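The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines are emitted by client-go's token-bucket rate limiter: the delay happens inside the client before the request is sent, not in the apiserver's Priority and Fairness queues. A hedged sketch of how a caller would raise those limits follows; the kubeconfig path and the numbers are illustrative (client-go's defaults are QPS 5, Burst 10).

```go
// Sketch: raising client-go's client-side rate limits on rest.Config.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // default is 5 requests/s
	cfg.Burst = 100 // default burst is 10
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods\n", len(pods.Items))
}
```

Raising these limits only shifts the load onto the apiserver, which is presumably why the tooling here keeps the defaults and simply logs the sub-200ms waits.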
	I0818 12:07:02.963284    3824 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 12:07:03.160239    3824 request.go:632] Waited for 196.905811ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0818 12:07:03.160320    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0818 12:07:03.160329    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:03.160341    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:03.160363    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:03.165404    3824 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 12:07:03.170694    3824 system_pods.go:86] 26 kube-system pods found
	I0818 12:07:03.170706    3824 system_pods.go:89] "coredns-6f6b679f8f-hv98f" [3ebd3bbf-bcb9-4afc-80f3-eaeb314da8eb] Running
	I0818 12:07:03.170710    3824 system_pods.go:89] "coredns-6f6b679f8f-rcfmc" [904f6af6-c2de-4bc2-bc1d-f87e209ef4ed] Running
	I0818 12:07:03.170714    3824 system_pods.go:89] "etcd-ha-373000" [29cf5b88-f2b5-40d4-9f40-5aedcc75c4d1] Running
	I0818 12:07:03.170717    3824 system_pods.go:89] "etcd-ha-373000-m02" [6b2f793b-6d8c-4580-b555-2be21fb70246] Running
	I0818 12:07:03.170720    3824 system_pods.go:89] "etcd-ha-373000-m03" [377788a8-77e7-4e5e-a488-9791f8e26b32] Running
	I0818 12:07:03.170723    3824 system_pods.go:89] "kindnet-2gf5h" [ff15d17a-fb96-4721-847f-13f5c0e2613a] Running
	I0818 12:07:03.170725    3824 system_pods.go:89] "kindnet-k4c4p" [5219ec54-5c0a-471a-a99f-2823b4f944d9] Running
	I0818 12:07:03.170728    3824 system_pods.go:89] "kindnet-q7ghp" [933c4eb9-1d38-4575-a873-183f2fd31b25] Running
	I0818 12:07:03.170731    3824 system_pods.go:89] "kindnet-wxcx9" [11d95b76-db04-43de-9d3d-ce2147fbed21] Running
	I0818 12:07:03.170733    3824 system_pods.go:89] "kube-apiserver-ha-373000" [cc4f174d-0e3c-4aa8-b039-0bf57d6b0228] Running
	I0818 12:07:03.170737    3824 system_pods.go:89] "kube-apiserver-ha-373000-m02" [b72f02e8-46f2-46d4-8e73-03281cc424fc] Running
	I0818 12:07:03.170740    3824 system_pods.go:89] "kube-apiserver-ha-373000-m03" [5d3892b8-07e1-40a0-b4dc-4d9d9e8bc254] Running
	I0818 12:07:03.170743    3824 system_pods.go:89] "kube-controller-manager-ha-373000" [78b55cfd-8f33-4071-99b0-900fcb256ed1] Running
	I0818 12:07:03.170746    3824 system_pods.go:89] "kube-controller-manager-ha-373000-m02" [e261db19-d00f-41d1-aaf2-c1b77067c217] Running
	I0818 12:07:03.170749    3824 system_pods.go:89] "kube-controller-manager-ha-373000-m03" [fa71d7d9-904d-4df6-afe9-768758e32854] Running
	I0818 12:07:03.170752    3824 system_pods.go:89] "kube-proxy-2xkhp" [6a1cb8ad-e118-4db0-8852-b2d4973f536a] Running
	I0818 12:07:03.170755    3824 system_pods.go:89] "kube-proxy-5hg88" [4d00bf3e-8817-4220-91c5-7085fa453fa6] Running
	I0818 12:07:03.170757    3824 system_pods.go:89] "kube-proxy-bprqp" [4cf4fd8d-01fb-4e36-bede-6a44ab84b7bf] Running
	I0818 12:07:03.170760    3824 system_pods.go:89] "kube-proxy-l7zlx" [853afdf8-598a-435c-8c48-233287580493] Running
	I0818 12:07:03.170763    3824 system_pods.go:89] "kube-scheduler-ha-373000" [b1284a96-b76a-4909-9301-bfff7bbef8d6] Running
	I0818 12:07:03.170765    3824 system_pods.go:89] "kube-scheduler-ha-373000-m02" [fdfdc014-0b22-48aa-a1c6-5885bc67d472] Running
	I0818 12:07:03.170769    3824 system_pods.go:89] "kube-scheduler-ha-373000-m03" [89e378c4-274f-4aa1-8683-127061a7033e] Running
	I0818 12:07:03.170772    3824 system_pods.go:89] "kube-vip-ha-373000" [8df6a228-9693-4a21-af87-1bc9fc2af995] Running
	I0818 12:07:03.170774    3824 system_pods.go:89] "kube-vip-ha-373000-m02" [9cf001da-2ec7-4c04-bcb8-baa2bd4626f4] Running
	I0818 12:07:03.170777    3824 system_pods.go:89] "kube-vip-ha-373000-m03" [2e8a44eb-d965-4a90-ae08-464c7064ae17] Running
	I0818 12:07:03.170779    3824 system_pods.go:89] "storage-provisioner" [aa9c6f5d-6c1e-4901-83bb-62bc420ea044] Running
	I0818 12:07:03.170784    3824 system_pods.go:126] duration metric: took 207.500936ms to wait for k8s-apps to be running ...
	I0818 12:07:03.170789    3824 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 12:07:03.170841    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 12:07:03.182482    3824 system_svc.go:56] duration metric: took 11.680891ms WaitForService to wait for kubelet
	I0818 12:07:03.182502    3824 kubeadm.go:582] duration metric: took 23.946290558s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:07:03.182518    3824 node_conditions.go:102] verifying NodePressure condition ...
	I0818 12:07:03.360851    3824 request.go:632] Waited for 178.265424ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0818 12:07:03.360972    3824 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0818 12:07:03.360984    3824 round_trippers.go:469] Request Headers:
	I0818 12:07:03.360994    3824 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0818 12:07:03.361004    3824 round_trippers.go:473]     Accept: application/json, */*
	I0818 12:07:03.364644    3824 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 12:07:03.365979    3824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 12:07:03.365989    3824 node_conditions.go:123] node cpu capacity is 2
	I0818 12:07:03.365996    3824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 12:07:03.365999    3824 node_conditions.go:123] node cpu capacity is 2
	I0818 12:07:03.366002    3824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 12:07:03.366005    3824 node_conditions.go:123] node cpu capacity is 2
	I0818 12:07:03.366008    3824 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 12:07:03.366011    3824 node_conditions.go:123] node cpu capacity is 2
	I0818 12:07:03.366014    3824 node_conditions.go:105] duration metric: took 183.498142ms to run NodePressure ...
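The node_conditions check above lists the cluster's nodes and reads the same two capacity fields (CPU and ephemeral storage) from each of the four entries. A sketch of that read via client-go, with the kubeconfig path again an illustrative assumption:

```go
// Sketch: reading per-node capacity, as node_conditions.go logs above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
```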
	I0818 12:07:03.366022    3824 start.go:241] waiting for startup goroutines ...
	I0818 12:07:03.366037    3824 start.go:255] writing updated cluster config ...
	I0818 12:07:03.387453    3824 out.go:201] 
	I0818 12:07:03.408870    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:07:03.408996    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:07:03.431363    3824 out.go:177] * Starting "ha-373000-m04" worker node in "ha-373000" cluster
	I0818 12:07:03.473303    3824 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:07:03.473331    3824 cache.go:56] Caching tarball of preloaded images
	I0818 12:07:03.473487    3824 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0818 12:07:03.473500    3824 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:07:03.473589    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:07:03.474432    3824 start.go:360] acquireMachinesLock for ha-373000-m04: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:07:03.474523    3824 start.go:364] duration metric: took 71.686µs to acquireMachinesLock for "ha-373000-m04"
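acquireMachinesLock serializes machine create/start operations; the log line above shows its retry delay (500ms) and timeout (13m). minikube's real implementation is a named mutex, so the lock-file version below is only a generic illustration of the same Delay/Timeout semantics:

```go
// Generic Delay/Timeout lock sketch (not minikube's named-mutex code).
package main

import (
	"fmt"
	"os"
	"time"
)

func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		// O_EXCL makes creation atomic: exactly one caller wins the lock.
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			return func() { f.Close(); os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(delay) // the 500ms Delay shown in the log line above
	}
}

func main() {
	release, err := acquireLock("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("lock held")
}
```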
	I0818 12:07:03.474542    3824 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:07:03.474548    3824 fix.go:54] fixHost starting: m04
	I0818 12:07:03.474855    3824 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:07:03.474882    3824 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:07:03.484549    3824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51853
	I0818 12:07:03.484938    3824 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:07:03.485323    3824 main.go:141] libmachine: Using API Version  1
	I0818 12:07:03.485338    3824 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:07:03.485563    3824 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:07:03.485683    3824 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	I0818 12:07:03.485781    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetState
	I0818 12:07:03.485864    3824 main.go:141] libmachine: (ha-373000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:07:03.485969    3824 main.go:141] libmachine: (ha-373000-m04) DBG | hyperkit pid from json: 3421
	I0818 12:07:03.486880    3824 main.go:141] libmachine: (ha-373000-m04) DBG | hyperkit pid 3421 missing from process table
	I0818 12:07:03.486901    3824 fix.go:112] recreateIfNeeded on ha-373000-m04: state=Stopped err=<nil>
	I0818 12:07:03.486912    3824 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	W0818 12:07:03.486988    3824 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:07:03.508504    3824 out.go:177] * Restarting existing hyperkit VM for "ha-373000-m04" ...
	I0818 12:07:03.582318    3824 main.go:141] libmachine: (ha-373000-m04) Calling .Start
	I0818 12:07:03.582606    3824 main.go:141] libmachine: (ha-373000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:07:03.582712    3824 main.go:141] libmachine: (ha-373000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/hyperkit.pid
	I0818 12:07:03.582838    3824 main.go:141] libmachine: (ha-373000-m04) DBG | Using UUID 421610dc-2abf-427c-8c2b-c85701e511a2
	I0818 12:07:03.610902    3824 main.go:141] libmachine: (ha-373000-m04) DBG | Generated MAC f2:8c:91:ee:dd:c0
	I0818 12:07:03.610923    3824 main.go:141] libmachine: (ha-373000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000
	I0818 12:07:03.611054    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"421610dc-2abf-427c-8c2b-c85701e511a2", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000299560)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:07:03.611081    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"421610dc-2abf-427c-8c2b-c85701e511a2", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000299560)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:07:03.611126    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "421610dc-2abf-427c-8c2b-c85701e511a2", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/ha-373000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"}
	I0818 12:07:03.611176    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 421610dc-2abf-427c-8c2b-c85701e511a2 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/ha-373000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"
	I0818 12:07:03.611189    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 12:07:03.612626    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 DEBUG: hyperkit: Pid is 3877
	I0818 12:07:03.613079    3824 main.go:141] libmachine: (ha-373000-m04) DBG | Attempt 0
	I0818 12:07:03.613097    3824 main.go:141] libmachine: (ha-373000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:07:03.613147    3824 main.go:141] libmachine: (ha-373000-m04) DBG | hyperkit pid from json: 3877
	I0818 12:07:03.614336    3824 main.go:141] libmachine: (ha-373000-m04) DBG | Searching for f2:8c:91:ee:dd:c0 in /var/db/dhcpd_leases ...
	I0818 12:07:03.614413    3824 main.go:141] libmachine: (ha-373000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0818 12:07:03.614438    3824 main.go:141] libmachine: (ha-373000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c3979e}
	I0818 12:07:03.614464    3824 main.go:141] libmachine: (ha-373000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c3975b}
	I0818 12:07:03.614488    3824 main.go:141] libmachine: (ha-373000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39749}
	I0818 12:07:03.614500    3824 main.go:141] libmachine: (ha-373000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c245a6}
	I0818 12:07:03.614507    3824 main.go:141] libmachine: (ha-373000-m04) DBG | Found match: f2:8c:91:ee:dd:c0
	I0818 12:07:03.614515    3824 main.go:141] libmachine: (ha-373000-m04) DBG | IP: 192.169.0.8
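With no agent running inside the guest yet, the hyperkit driver discovers the VM's IP by matching its generated MAC (f2:8c:91:ee:dd:c0) against macOS's DHCP lease database, as the "Searching for ... in /var/db/dhcpd_leases" lines show. Below is a sketch of that lookup; the line-oriented parsing is an assumption based only on the name=/ip_address=/hw_address= fields visible in this log, not on hyperkit's actual parser.

```go
// Sketch: MAC -> IP lookup against /var/db/dhcpd_leases (format assumed).
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func lookupIP(leasesPath, mac string) (string, error) {
	f, err := os.Open(leasesPath)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// e.g. hw_address=1,f2:8c:91:ee:dd:c0 -- drop the "1," type prefix.
			addr := strings.TrimPrefix(line, "hw_address=")
			if i := strings.IndexByte(addr, ','); i >= 0 {
				addr = addr[i+1:]
			}
			if strings.EqualFold(addr, mac) {
				return ip, nil
			}
		}
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := lookupIP("/var/db/dhcpd_leases", "f2:8c:91:ee:dd:c0")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("IP:", ip)
}
```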
	I0818 12:07:03.614531    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetConfigRaw
	I0818 12:07:03.615303    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetIP
	I0818 12:07:03.615492    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:07:03.615967    3824 machine.go:93] provisionDockerMachine start ...
	I0818 12:07:03.615979    3824 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	I0818 12:07:03.616121    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:03.616256    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:03.616397    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:03.616508    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:03.616609    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:03.616727    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:07:03.616882    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0818 12:07:03.616892    3824 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 12:07:03.621176    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 12:07:03.629669    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 12:07:03.630674    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:07:03.630697    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:07:03.630709    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:07:03.630724    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:07:04.012965    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 12:07:04.012987    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 12:07:04.127720    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:07:04.127750    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:07:04.127760    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:07:04.127778    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:07:04.128559    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 12:07:04.128569    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 12:07:09.784251    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:09 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0818 12:07:09.784338    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:09 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0818 12:07:09.784350    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:09 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0818 12:07:09.808163    3824 main.go:141] libmachine: (ha-373000-m04) DBG | 2024/08/18 12:07:09 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0818 12:07:14.674465    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 12:07:14.674484    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetMachineName
	I0818 12:07:14.674657    3824 buildroot.go:166] provisioning hostname "ha-373000-m04"
	I0818 12:07:14.674669    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetMachineName
	I0818 12:07:14.674755    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:14.674835    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:14.674920    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:14.675008    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:14.675105    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:14.675237    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:07:14.675389    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0818 12:07:14.675398    3824 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-373000-m04 && echo "ha-373000-m04" | sudo tee /etc/hostname
	I0818 12:07:14.738016    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-373000-m04
	
	I0818 12:07:14.738030    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:14.738166    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:14.738262    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:14.738354    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:14.738444    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:14.738575    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:07:14.738730    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0818 12:07:14.738742    3824 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-373000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-373000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-373000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 12:07:14.800929    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: 
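The shell block above is an idempotent /etc/hosts fixup: if no line already ends with the hostname, it either rewrites an existing 127.0.1.1 entry in place or appends one. A sketch of rendering that script for an arbitrary hostname (a simplified assumption, not minikube's exact template):

```go
// Sketch: rendering the idempotent /etc/hosts fixup for a given hostname.
package main

import "fmt"

func hostsFixup(hostname string) string {
	return fmt.Sprintf(
		`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() { fmt.Println(hostsFixup("ha-373000-m04")) }
```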
	I0818 12:07:14.800946    3824 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1007/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1007/.minikube}
	I0818 12:07:14.800959    3824 buildroot.go:174] setting up certificates
	I0818 12:07:14.800965    3824 provision.go:84] configureAuth start
	I0818 12:07:14.800972    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetMachineName
	I0818 12:07:14.801115    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetIP
	I0818 12:07:14.801241    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:14.801327    3824 provision.go:143] copyHostCerts
	I0818 12:07:14.801357    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:07:14.801411    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem, removing ...
	I0818 12:07:14.801417    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:07:14.801581    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem (1082 bytes)
	I0818 12:07:14.801805    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:07:14.801837    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem, removing ...
	I0818 12:07:14.801842    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:07:14.801922    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem (1123 bytes)
	I0818 12:07:14.802072    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:07:14.802105    3824 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem, removing ...
	I0818 12:07:14.802110    3824 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:07:14.802180    3824 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem (1675 bytes)
	I0818 12:07:14.802329    3824 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem org=jenkins.ha-373000-m04 san=[127.0.0.1 192.169.0.8 ha-373000-m04 localhost minikube]
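provision.go signs a per-machine Docker server certificate against the shared CA, with SANs covering every name the daemon may be reached by (127.0.0.1, 192.169.0.8, ha-373000-m04, localhost, minikube). A sketch of that step with crypto/x509 follows; unlike the real flow, it generates a throwaway CA in-process instead of loading the existing ca.pem/ca-key.pem from disk.

```go
// Sketch: signing a server cert with SANs against a (stand-in) CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Stand-in CA (an assumption; the log's CA already exists on disk).
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"jenkins.ha-373000-m04"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caCert := must(x509.ParseCertificate(
		must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))))

	// Server cert with the SANs from the provision.go line above.
	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-373000-m04"}},
		DNSNames:     []string{"ha-373000-m04", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.8")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
	must(0, pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}
```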
	I0818 12:07:15.264268    3824 provision.go:177] copyRemoteCerts
	I0818 12:07:15.264318    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 12:07:15.264333    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:15.264514    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:15.264635    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:15.264736    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:15.264840    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/id_rsa Username:docker}
	I0818 12:07:15.297241    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 12:07:15.297314    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 12:07:15.317451    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 12:07:15.317516    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0818 12:07:15.337321    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 12:07:15.337400    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 12:07:15.357216    3824 provision.go:87] duration metric: took 556.258633ms to configureAuth
	I0818 12:07:15.357236    3824 buildroot.go:189] setting minikube options for container-runtime
	I0818 12:07:15.357403    3824 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:07:15.357417    3824 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	I0818 12:07:15.357555    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:15.357641    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:15.357721    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:15.357806    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:15.357885    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:15.357993    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:07:15.358121    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0818 12:07:15.358132    3824 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0818 12:07:15.410788    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0818 12:07:15.410801    3824 buildroot.go:70] root file system type: tmpfs
	I0818 12:07:15.410873    3824 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0818 12:07:15.410885    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:15.411015    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:15.411098    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:15.411194    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:15.411280    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:15.411394    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:07:15.411541    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0818 12:07:15.411587    3824 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0818 12:07:15.476241    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	Environment=NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0818 12:07:15.476261    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:15.476401    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:15.476490    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:15.476597    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:15.476697    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:15.476838    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:07:15.476977    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0818 12:07:15.476990    3824 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0818 12:07:17.071913    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0818 12:07:17.071932    3824 machine.go:96] duration metric: took 13.456373306s to provisionDockerMachine
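The `diff ... || { mv ...; daemon-reload; enable; restart; }` one-liner above is an update-only-if-changed pattern: docker is reloaded and restarted only when the rendered unit differs from the installed one (here the diff fails because no unit file exists yet, so the new file is installed and the service enabled). A sketch of the same pattern in Go; the paths and placeholder unit body are illustrative.

```go
// Sketch: rewrite a unit file and restart the service only on change.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func updateUnit(path string, rendered []byte) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // unchanged: no daemon-reload, no restart
	}
	if err := os.WriteFile(path, rendered, 0o644); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %s: %w", args, out, err)
		}
	}
	return nil
}

func main() {
	// Placeholder unit body; the real content is the rendered docker.service above.
	if err := updateUnit("/lib/systemd/system/docker.service", []byte("[Unit]\n")); err != nil {
		fmt.Println(err)
	}
}
```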
	I0818 12:07:17.071939    3824 start.go:293] postStartSetup for "ha-373000-m04" (driver="hyperkit")
	I0818 12:07:17.071946    3824 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 12:07:17.071960    3824 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	I0818 12:07:17.072162    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 12:07:17.072176    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:17.072278    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:17.072367    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:17.072484    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:17.072586    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/id_rsa Username:docker}
	I0818 12:07:17.114832    3824 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 12:07:17.118934    3824 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 12:07:17.118950    3824 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/addons for local assets ...
	I0818 12:07:17.119044    3824 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/files for local assets ...
	I0818 12:07:17.119187    3824 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> 15262.pem in /etc/ssl/certs
	I0818 12:07:17.119194    3824 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /etc/ssl/certs/15262.pem
	I0818 12:07:17.119347    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 12:07:17.131072    3824 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:07:17.162572    3824 start.go:296] duration metric: took 90.627646ms for postStartSetup
	I0818 12:07:17.162595    3824 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	I0818 12:07:17.162766    3824 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0818 12:07:17.162780    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:17.162865    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:17.162946    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:17.163031    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:17.163111    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/id_rsa Username:docker}
	I0818 12:07:17.196597    3824 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0818 12:07:17.196659    3824 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0818 12:07:17.249652    3824 fix.go:56] duration metric: took 13.775528593s for fixHost
	I0818 12:07:17.249680    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:17.249818    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:17.249905    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:17.249992    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:17.250086    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:17.250222    3824 main.go:141] libmachine: Using SSH client type: native
	I0818 12:07:17.250363    3824 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb48aea0] 0xb48dc00 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0818 12:07:17.250370    3824 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 12:07:17.303909    3824 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724008037.336410727
	
	I0818 12:07:17.303922    3824 fix.go:216] guest clock: 1724008037.336410727
	I0818 12:07:17.303927    3824 fix.go:229] Guest: 2024-08-18 12:07:17.336410727 -0700 PDT Remote: 2024-08-18 12:07:17.249669 -0700 PDT m=+165.308150896 (delta=86.741727ms)
	I0818 12:07:17.303937    3824 fix.go:200] guest clock delta is within tolerance: 86.741727ms
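fix.go reads the guest clock over SSH with `date +%s.%N`, compares it against the host clock, and only resyncs when the delta exceeds a tolerance; here the 86.7ms delta passes. A sketch of the parse-and-compare step, reusing the exact values from the log lines above; the one-second threshold is an assumption, since the log only shows the delta passing.

```go
// Sketch: parse `date +%s.%N` output and compare against the host clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		// %N always prints nine digits, so the fraction is nanoseconds.
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	// Guest and host timestamps taken verbatim from the log above.
	delta, err := clockDelta("1724008037.336410727", time.Unix(1724008037, 249669000))
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed threshold
	fmt.Printf("delta=%s within tolerance: %v\n", delta, delta.Abs() < tolerance)
}
```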
	I0818 12:07:17.303941    3824 start.go:83] releasing machines lock for "ha-373000-m04", held for 13.829839932s
	I0818 12:07:17.303960    3824 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	I0818 12:07:17.304093    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetIP
	I0818 12:07:17.325783    3824 out.go:177] * Found network options:
	I0818 12:07:17.347322    3824 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	W0818 12:07:17.368151    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	W0818 12:07:17.368179    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	W0818 12:07:17.368192    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 12:07:17.368225    3824 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	I0818 12:07:17.368728    3824 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	I0818 12:07:17.368862    3824 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	I0818 12:07:17.368947    3824 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 12:07:17.368991    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	W0818 12:07:17.369043    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	W0818 12:07:17.369069    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	W0818 12:07:17.369086    3824 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 12:07:17.369158    3824 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0818 12:07:17.369174    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:17.369197    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:07:17.369352    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:17.369370    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:07:17.369488    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:17.369507    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:07:17.369677    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/id_rsa Username:docker}
	I0818 12:07:17.369697    3824 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:07:17.369814    3824 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/id_rsa Username:docker}
	W0818 12:07:17.399808    3824 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 12:07:17.399874    3824 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 12:07:17.453508    3824 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 12:07:17.453527    3824 start.go:495] detecting cgroup driver to use...
	I0818 12:07:17.453602    3824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:07:17.468947    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0818 12:07:17.477909    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0818 12:07:17.486368    3824 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0818 12:07:17.486429    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0818 12:07:17.495070    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:07:17.503908    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0818 12:07:17.512255    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:07:17.520784    3824 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 12:07:17.529449    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0818 12:07:17.538408    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0818 12:07:17.546916    3824 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0818 12:07:17.555361    3824 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 12:07:17.562930    3824 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 12:07:17.571624    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:07:17.670212    3824 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0818 12:07:17.690532    3824 start.go:495] detecting cgroup driver to use...
	I0818 12:07:17.690608    3824 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0818 12:07:17.710894    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:07:17.721349    3824 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 12:07:17.738837    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:07:17.750943    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:07:17.762092    3824 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0818 12:07:17.786808    3824 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:07:17.798198    3824 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:07:17.813512    3824 ssh_runner.go:195] Run: which cri-dockerd
	I0818 12:07:17.816407    3824 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0818 12:07:17.824320    3824 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0818 12:07:17.838071    3824 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0818 12:07:17.938835    3824 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0818 12:07:18.032593    3824 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0818 12:07:18.032616    3824 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0818 12:07:18.046682    3824 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:07:18.149082    3824 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 12:08:19.094745    3824 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.947540366s)
	I0818 12:08:19.094811    3824 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0818 12:08:19.130194    3824 out.go:201] 
	W0818 12:08:19.167950    3824 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 18 19:07:15 ha-373000-m04 systemd[1]: Starting Docker Application Container Engine...
	Aug 18 19:07:15 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:15.789565294Z" level=info msg="Starting up"
	Aug 18 19:07:15 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:15.790497979Z" level=info msg="containerd not running, starting managed containerd"
	Aug 18 19:07:15 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:15.791060023Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=491
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.808949895Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.823962995Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824017555Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824063133Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824074046Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824245628Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824285399Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824412941Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824458745Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824472526Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824481113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824628618Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.824862154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.826539571Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.826578591Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.826700099Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.826735930Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.826894261Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.826943257Z" level=info msg="metadata content store policy set" policy=shared
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828221494Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828269425Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828283877Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828294494Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828306440Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828355173Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828863798Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.828968570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829012385Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829087106Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829133358Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829171270Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829205360Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829239671Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829274394Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829307961Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829340520Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829370638Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829531056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829845805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829883191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829896300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829908724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829919786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829928151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829938442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829947500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829958637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829966701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.829975548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830016884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830031620Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830069034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830080580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830090618Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830119633Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830130594Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830138753Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830147234Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830156530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830165223Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830172746Z" level=info msg="NRI interface is disabled by configuration."
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830327211Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830423458Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830503251Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 18 19:07:15 ha-373000-m04 dockerd[491]: time="2024-08-18T19:07:15.830581618Z" level=info msg="containerd successfully booted in 0.022620s"
	Aug 18 19:07:16 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:16.817938076Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 18 19:07:16 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:16.831116800Z" level=info msg="Loading containers: start."
	Aug 18 19:07:16 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:16.929784593Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 18 19:07:16 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:16.991389466Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 18 19:07:17 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:17.063078080Z" level=info msg="Loading containers: done."
	Aug 18 19:07:17 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:17.074071701Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 18 19:07:17 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:17.074231517Z" level=info msg="Daemon has completed initialization"
	Aug 18 19:07:17 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:17.097399297Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 18 19:07:17 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:17.097566032Z" level=info msg="API listen on [::]:2376"
	Aug 18 19:07:17 ha-373000-m04 systemd[1]: Started Docker Application Container Engine.
	Aug 18 19:07:18 ha-373000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Aug 18 19:07:18 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:18.209129651Z" level=info msg="Processing signal 'terminated'"
	Aug 18 19:07:18 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:18.210124874Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 18 19:07:18 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:18.210325925Z" level=info msg="Daemon shutdown complete"
	Aug 18 19:07:18 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:18.210407877Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 18 19:07:18 ha-373000-m04 dockerd[485]: time="2024-08-18T19:07:18.210420112Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 18 19:07:19 ha-373000-m04 systemd[1]: docker.service: Deactivated successfully.
	Aug 18 19:07:19 ha-373000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Aug 18 19:07:19 ha-373000-m04 systemd[1]: Starting Docker Application Container Engine...
	Aug 18 19:07:19 ha-373000-m04 dockerd[1176]: time="2024-08-18T19:07:19.260443864Z" level=info msg="Starting up"
	Aug 18 19:08:19 ha-373000-m04 dockerd[1176]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 18 19:08:19 ha-373000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 18 19:08:19 ha-373000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 18 19:08:19 ha-373000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0818 12:08:19.168043    3824 out.go:270] * 
	W0818 12:08:19.169228    3824 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:08:19.232626    3824 out.go:201] 
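	
	Note: the tail of the m04 journal above ends with dockerd's restart failing with 'failed to dial "/run/containerd/containerd.sock": context deadline exceeded', i.e. the second dockerd (pid 1176) gave up waiting on the system containerd socket, even though the first instance had been running its own managed containerd at /var/run/docker/containerd/containerd.sock. A reasonable first look, assuming shell access to the worker through the profile (minikube ssh's --node flag selects it), would be:
	
	  $ minikube ssh -p ha-373000 --node m04
	  $ sudo systemctl status docker containerd
	  $ sudo journalctl -u containerd --no-pager | tail -n 50
	
	Whether this guest image runs a separate containerd.service, or dockerd is expected to keep managing its own containerd, is an assumption these commands would confirm rather than something the excerpt above settles.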
	
	
	==> Docker <==
	Aug 18 19:05:42 ha-373000 dockerd[1167]: time="2024-08-18T19:05:42.397909257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:05:42 ha-373000 dockerd[1167]: time="2024-08-18T19:05:42.400610172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:05:42 ha-373000 cri-dockerd[1420]: time="2024-08-18T19:05:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fc1f2fb60f7c58ea2a794ed7b3890a722b7e02d695c8b7d8be84e17d817f22ff/resolv.conf as [nameserver 192.169.0.1]"
	Aug 18 19:05:42 ha-373000 cri-dockerd[1420]: time="2024-08-18T19:05:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3772c138aa65e84e76733835788a3b5c8c0f94bde29eaad82c89e1b944ad3bff/resolv.conf as [nameserver 192.169.0.1]"
	Aug 18 19:05:42 ha-373000 dockerd[1167]: time="2024-08-18T19:05:42.550381570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 18 19:05:42 ha-373000 dockerd[1167]: time="2024-08-18T19:05:42.550475397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 18 19:05:42 ha-373000 dockerd[1167]: time="2024-08-18T19:05:42.550487498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:05:42 ha-373000 dockerd[1167]: time="2024-08-18T19:05:42.550588600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:05:42 ha-373000 cri-dockerd[1420]: time="2024-08-18T19:05:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eb4ed9664dda977cc9b021fafae44e8ee00272a594ba9ddcb993b4d0d5f0db6f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 18 19:05:42 ha-373000 dockerd[1167]: time="2024-08-18T19:05:42.611318900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 18 19:05:42 ha-373000 dockerd[1167]: time="2024-08-18T19:05:42.611621875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 18 19:05:42 ha-373000 dockerd[1167]: time="2024-08-18T19:05:42.611734513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:05:42 ha-373000 dockerd[1167]: time="2024-08-18T19:05:42.612037359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:05:42 ha-373000 dockerd[1167]: time="2024-08-18T19:05:42.725056501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 18 19:05:42 ha-373000 dockerd[1167]: time="2024-08-18T19:05:42.725946033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 18 19:05:42 ha-373000 dockerd[1167]: time="2024-08-18T19:05:42.726057340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:05:42 ha-373000 dockerd[1167]: time="2024-08-18T19:05:42.726259789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:06:13 ha-373000 dockerd[1161]: time="2024-08-18T19:06:13.034511897Z" level=info msg="ignoring event" container=b857c2fef140c7ff17e7936e8ecc11703f749579876ae1a0c9996a26e5a9242b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 18 19:06:13 ha-373000 dockerd[1167]: time="2024-08-18T19:06:13.034748077Z" level=info msg="shim disconnected" id=b857c2fef140c7ff17e7936e8ecc11703f749579876ae1a0c9996a26e5a9242b namespace=moby
	Aug 18 19:06:13 ha-373000 dockerd[1167]: time="2024-08-18T19:06:13.034780713Z" level=warning msg="cleaning up after shim disconnected" id=b857c2fef140c7ff17e7936e8ecc11703f749579876ae1a0c9996a26e5a9242b namespace=moby
	Aug 18 19:06:13 ha-373000 dockerd[1167]: time="2024-08-18T19:06:13.034787207Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 18 19:06:27 ha-373000 dockerd[1167]: time="2024-08-18T19:06:27.423655859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 18 19:06:27 ha-373000 dockerd[1167]: time="2024-08-18T19:06:27.423798647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 18 19:06:27 ha-373000 dockerd[1167]: time="2024-08-18T19:06:27.423827418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:06:27 ha-373000 dockerd[1167]: time="2024-08-18T19:06:27.423965192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
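	
	Note: unlike m04, the primary node's docker journal above is routine; the only teardown is shim b857c2fef140... exiting at 19:06:13, which matches the storage-provisioner attempt shown as Exited in the container listing below before its successor started. If its output is needed, it should still be retrievable in-guest (assuming the container has not been pruned):
	
	  $ minikube ssh -p ha-373000 -- docker logs b857c2fef140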
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	eb459a6cac5c5       6e38f40d628db                                                                                         2 minutes ago       Running             storage-provisioner       2                   3772c138aa65e       storage-provisioner
	fc1b30cd2c8f2       8c811b4aec35f                                                                                         2 minutes ago       Running             busybox                   1                   eb4ed9664dda9       busybox-7dff88458-hdg8r
	f3dbf3c176d9d       cbb01a7bd410d                                                                                         2 minutes ago       Running             coredns                   1                   fc1f2fb60f7c5       coredns-6f6b679f8f-rcfmc
	b857c2fef140c       6e38f40d628db                                                                                         2 minutes ago       Exited              storage-provisioner       1                   3772c138aa65e       storage-provisioner
	09b8ded75e80f       cbb01a7bd410d                                                                                         2 minutes ago       Running             coredns                   1                   bfce6a3dd1783       coredns-6f6b679f8f-hv98f
	530d580001894       ad83b2ca7b09e                                                                                         2 minutes ago       Running             kube-proxy                1                   c8f48c6f44e55       kube-proxy-2xkhp
	fbeef7aab770f       12968670680f4                                                                                         2 minutes ago       Running             kindnet-cni               1                   32a6ca59d02e7       kindnet-k4c4p
	2848cdc0e8c15       045733566833c                                                                                         2 minutes ago       Running             kube-controller-manager   2                   76a884a77895b       kube-controller-manager-ha-373000
	ebe78e53d91d8       38af8ddebf499                                                                                         3 minutes ago       Running             kube-vip                  0                   32cc18cf0bf63       kube-vip-ha-373000
	a9e532272f1be       2e96e5913fc06                                                                                         3 minutes ago       Running             etcd                      1                   4c11500a40693       etcd-ha-373000
	de016fdbd6fe9       1766f54c897f0                                                                                         3 minutes ago       Running             kube-scheduler            1                   a3cc486386c46       kube-scheduler-ha-373000
	8d1b9f96928b6       604f5db92eaa8                                                                                         3 minutes ago       Running             kube-apiserver            1                   c3ec38b5b8b88       kube-apiserver-ha-373000
	91e90de8fe34f       045733566833c                                                                                         3 minutes ago       Exited              kube-controller-manager   1                   76a884a77895b       kube-controller-manager-ha-373000
	e4c8538956c47       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   6 minutes ago       Exited              busybox                   0                   d600143e2a2b0       busybox-7dff88458-hdg8r
	a183c1159f971       cbb01a7bd410d                                                                                         8 minutes ago       Exited              coredns                   0                   ad6105cce447d       coredns-6f6b679f8f-hv98f
	aa4d1e9b3fb56       cbb01a7bd410d                                                                                         8 minutes ago       Exited              coredns                   0                   238410437a3ad       coredns-6f6b679f8f-rcfmc
	0d55a0eeb67f5       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              8 minutes ago       Exited              kindnet-cni               0                   462acdf375c7b       kindnet-k4c4p
	8493354682ea9       ad83b2ca7b09e                                                                                         9 minutes ago       Exited              kube-proxy                0                   4ea29595ff287       kube-proxy-2xkhp
	da35cb184d7df       604f5db92eaa8                                                                                         9 minutes ago       Exited              kube-apiserver            0                   af987f19793c3       kube-apiserver-ha-373000
	311485d219660       2e96e5913fc06                                                                                         9 minutes ago       Exited              etcd                      0                   7a32c93f32a9c       etcd-ha-373000
	807d80bec4e45       1766f54c897f0                                                                                         9 minutes ago       Exited              kube-scheduler            0                   26832128bdd4d       kube-scheduler-ha-373000
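	
	Note: the listing above is in CRI format (CONTAINER/IMAGE/.../POD). Assuming the usual minikube node image, the same view can be reproduced in-guest with crictl, which talks to cri-dockerd here:
	
	  $ minikube ssh -p ha-373000 -- sudo crictl ps -a
	
	Reading it: kube-apiserver, kube-controller-manager and storage-provisioner each show an Exited attempt followed by a Running one, consistent with the control plane having restarted a few minutes before capture rather than staying down.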
	
	
	==> coredns [09b8ded75e80] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54168 - 48100 "HINFO IN 5449853140043981156.1960656544577820065. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012696853s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1317389180]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.060) (total time: 30002ms):
	Trace[1317389180]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (19:06:13.063)
	Trace[1317389180]: [30.002782846s] [30.002782846s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[804407349]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.060) (total time: 30003ms):
	Trace[804407349]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (19:06:13.063)
	Trace[804407349]: [30.003234686s] [30.003234686s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1407395902]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.063) (total time: 30001ms):
	Trace[1407395902]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (19:06:13.064)
	Trace[1407395902]: [30.001205512s] [30.001205512s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [a183c1159f97] <==
	[INFO] 10.244.0.4:47320 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000489917s
	[INFO] 10.244.2.2:54669 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012678s
	[INFO] 10.244.1.2:43705 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009229s
	[INFO] 10.244.1.2:54355 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011165041s
	[INFO] 10.244.1.2:33518 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010428983s
	[INFO] 10.244.0.4:45605 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000084046s
	[INFO] 10.244.0.4:50628 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000145592s
	[INFO] 10.244.0.4:33161 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126048s
	[INFO] 10.244.2.2:37518 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000121734s
	[INFO] 10.244.2.2:58873 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101169s
	[INFO] 10.244.2.2:50099 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070705s
	[INFO] 10.244.1.2:54977 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124537s
	[INFO] 10.244.1.2:43577 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073582s
	[INFO] 10.244.0.4:46803 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000059521s
	[INFO] 10.244.0.4:59171 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000040726s
	[INFO] 10.244.2.2:39966 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105714s
	[INFO] 10.244.2.2:51946 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007014s
	[INFO] 10.244.1.2:51245 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084678s
	[INFO] 10.244.1.2:40537 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000069547s
	[INFO] 10.244.0.4:36306 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000081884s
	[INFO] 10.244.2.2:41973 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000065341s
	[INFO] 10.244.2.2:57971 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000083126s
	[INFO] 10.244.2.2:43658 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000062409s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [aa4d1e9b3fb5] <==
	[INFO] 10.244.1.2:45157 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000149452s
	[INFO] 10.244.0.4:36007 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109468s
	[INFO] 10.244.0.4:38953 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110641s
	[INFO] 10.244.0.4:41701 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000625324s
	[INFO] 10.244.0.4:54986 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090319s
	[INFO] 10.244.0.4:44918 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000046498s
	[INFO] 10.244.2.2:55873 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125265s
	[INFO] 10.244.2.2:36969 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000098884s
	[INFO] 10.244.2.2:37588 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000041509s
	[INFO] 10.244.2.2:39779 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000038465s
	[INFO] 10.244.2.2:58973 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000072931s
	[INFO] 10.244.1.2:46606 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012235s
	[INFO] 10.244.1.2:55528 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000071719s
	[INFO] 10.244.0.4:43575 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000034973s
	[INFO] 10.244.0.4:55874 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073156s
	[INFO] 10.244.2.2:45694 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000050926s
	[INFO] 10.244.2.2:37999 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097175s
	[INFO] 10.244.1.2:39004 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108935s
	[INFO] 10.244.1.2:45716 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148013s
	[INFO] 10.244.0.4:40729 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000079121s
	[INFO] 10.244.0.4:38794 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000057287s
	[INFO] 10.244.0.4:48660 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000119698s
	[INFO] 10.244.2.2:38231 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000048609s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f3dbf3c176d9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48257 - 13179 "HINFO IN 3102078210809204073.2916918949998232158. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.013387746s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1929152146]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.060) (total time: 30003ms):
	Trace[1929152146]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (19:06:13.063)
	Trace[1929152146]: [30.003742558s] [30.003742558s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[763765503]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.060) (total time: 30003ms):
	Trace[763765503]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (19:06:13.064)
	Trace[763765503]: [30.003508272s] [30.003508272s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1437534784]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.063) (total time: 30000ms):
	Trace[1437534784]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:06:13.064)
	Trace[1437534784]: [30.000417221s] [30.000417221s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
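	
	Note: both restarted coredns replicas ([09b8ded75e80] and [f3dbf3c176d9]) report the same symptom: list/watch calls to the in-cluster API VIP https://10.96.0.1:443 timing out over the 19:05:43-19:06:13 window, which lines up with the kube-apiserver restart visible in the container listing above, while the pre-restart replicas simply received SIGTERM. That points at the service VIP path (kube-proxy / kube-vip) rather than coredns itself. Hedged follow-ups from the host kubeconfig:
	
	  $ kubectl get endpoints kubernetes
	  $ kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
	
	k8s-app=kube-dns is the conventional coredns selector; if this deployment labels its pods differently, the second command would need adjusting.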
	
	
	==> describe nodes <==
	Name:               ha-373000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-373000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=ha-373000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_18T11_59_27_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 18:59:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-373000
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:08:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 19:05:28 +0000   Sun, 18 Aug 2024 18:59:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 19:05:28 +0000   Sun, 18 Aug 2024 18:59:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 19:05:28 +0000   Sun, 18 Aug 2024 18:59:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 19:05:28 +0000   Sun, 18 Aug 2024 19:05:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-373000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 be8c970205d64d6f8d4700f55fd439c4
	  System UUID:                2f6e4f9b-0000-0000-8f55-d5f48a14c3df
	  Boot ID:                    bfd69bae-ba72-43fb-b7a0-1130a86ddec9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hdg8r              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 coredns-6f6b679f8f-hv98f             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m1s
	  kube-system                 coredns-6f6b679f8f-rcfmc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m1s
	  kube-system                 etcd-ha-373000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m6s
	  kube-system                 kindnet-k4c4p                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m2s
	  kube-system                 kube-apiserver-ha-373000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 kube-controller-manager-ha-373000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 kube-proxy-2xkhp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m2s
	  kube-system                 kube-scheduler-ha-373000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 kube-vip-ha-373000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m                     kube-proxy       
	  Normal  Starting                 2m49s                  kube-proxy       
	  Normal  Starting                 9m6s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m6s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m6s                   kubelet          Node ha-373000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m6s                   kubelet          Node ha-373000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m6s                   kubelet          Node ha-373000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m3s                   node-controller  Node ha-373000 event: Registered Node ha-373000 in Controller
	  Normal  NodeReady                8m44s                  kubelet          Node ha-373000 status is now: NodeReady
	  Normal  RegisteredNode           7m55s                  node-controller  Node ha-373000 event: Registered Node ha-373000 in Controller
	  Normal  RegisteredNode           6m42s                  node-controller  Node ha-373000 event: Registered Node ha-373000 in Controller
	  Normal  RegisteredNode           4m39s                  node-controller  Node ha-373000 event: Registered Node ha-373000 in Controller
	  Normal  NodeHasSufficientMemory  3m42s (x8 over 3m42s)  kubelet          Node ha-373000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 3m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    3m42s (x8 over 3m42s)  kubelet          Node ha-373000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m42s (x7 over 3m42s)  kubelet          Node ha-373000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m10s                  node-controller  Node ha-373000 event: Registered Node ha-373000 in Controller
	  Normal  RegisteredNode           2m48s                  node-controller  Node ha-373000 event: Registered Node ha-373000 in Controller
	  Normal  RegisteredNode           106s                   node-controller  Node ha-373000 event: Registered Node ha-373000 in Controller
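	
	Note: the event history records two kubelet generations on this node (9m6s and 3m42s ago) and a second kube-proxy start 2m49s ago, so the control plane itself was restarted roughly three and a half minutes before these logs were captured. The same timeline can be pulled cluster-wide with:
	
	  $ kubectl get events -A --sort-by=.lastTimestamp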
	
	
	Name:               ha-373000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-373000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=ha-373000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_18T12_00_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 19:00:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-373000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:08:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 19:05:17 +0000   Sun, 18 Aug 2024 19:00:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 19:05:17 +0000   Sun, 18 Aug 2024 19:00:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 19:05:17 +0000   Sun, 18 Aug 2024 19:00:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 19:05:17 +0000   Sun, 18 Aug 2024 19:00:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-373000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 1dd1525a71ed4e34854a0717875d7974
	  System UUID:                7a234b98-0000-0000-a476-83254bfde967
	  Boot ID:                    4f102243-3831-4b64-8d3d-63e4676f5c43
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-85gjs                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 etcd-ha-373000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m
	  kube-system                 kindnet-q7ghp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m3s
	  kube-system                 kube-apiserver-ha-373000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m
	  kube-system                 kube-controller-manager-ha-373000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m
	  kube-system                 kube-proxy-5hg88                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 kube-scheduler-ha-373000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m
	  kube-system                 kube-vip-ha-373000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 3m13s                  kube-proxy       
	  Normal   Starting                 4m43s                  kube-proxy       
	  Normal   Starting                 7m58s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  8m3s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8m3s (x8 over 8m3s)    kubelet          Node ha-373000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m3s (x8 over 8m3s)    kubelet          Node ha-373000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m3s (x7 over 8m3s)    kubelet          Node ha-373000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m58s                  node-controller  Node ha-373000-m02 event: Registered Node ha-373000-m02 in Controller
	  Normal   RegisteredNode           7m55s                  node-controller  Node ha-373000-m02 event: Registered Node ha-373000-m02 in Controller
	  Normal   RegisteredNode           6m42s                  node-controller  Node ha-373000-m02 event: Registered Node ha-373000-m02 in Controller
	  Normal   Starting                 4m46s                  kubelet          Starting kubelet.
	  Warning  Rebooted                 4m46s                  kubelet          Node ha-373000-m02 has been rebooted, boot id: cadd9b91-3eb1-4a50-944d-943942f3c889
	  Normal   NodeHasSufficientPID     4m46s                  kubelet          Node ha-373000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  4m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  4m46s                  kubelet          Node ha-373000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m46s                  kubelet          Node ha-373000-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           4m39s                  node-controller  Node ha-373000-m02 event: Registered Node ha-373000-m02 in Controller
	  Normal   Starting                 3m23s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  3m23s (x8 over 3m23s)  kubelet          Node ha-373000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m23s (x8 over 3m23s)  kubelet          Node ha-373000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m23s (x7 over 3m23s)  kubelet          Node ha-373000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  3m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           3m10s                  node-controller  Node ha-373000-m02 event: Registered Node ha-373000-m02 in Controller
	  Normal   RegisteredNode           2m48s                  node-controller  Node ha-373000-m02 event: Registered Node ha-373000-m02 in Controller
	  Normal   RegisteredNode           106s                   node-controller  Node ha-373000-m02 event: Registered Node ha-373000-m02 in Controller
	
	
	Name:               ha-373000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-373000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=ha-373000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_18T12_02_38_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 19:02:38 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-373000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:04:00 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 18 Aug 2024 19:03:09 +0000   Sun, 18 Aug 2024 19:06:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 18 Aug 2024 19:03:09 +0000   Sun, 18 Aug 2024 19:06:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 18 Aug 2024 19:03:09 +0000   Sun, 18 Aug 2024 19:06:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 18 Aug 2024 19:03:09 +0000   Sun, 18 Aug 2024 19:06:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-373000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb7e1fc529fe4c47b8d9b40f3d1984a6
	  System UUID:                4216427c-0000-0000-8c2b-c85701e511a2
	  Boot ID:                    ffa27ed1-34bc-4ad1-a52d-6b7cdfc1b588
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2gf5h       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m54s
	  kube-system                 kube-proxy-l7zlx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m47s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    5m54s (x2 over 5m55s)  kubelet          Node ha-373000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  5m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     5m54s (x2 over 5m55s)  kubelet          Node ha-373000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  5m54s (x2 over 5m55s)  kubelet          Node ha-373000-m04 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           5m53s                  node-controller  Node ha-373000-m04 event: Registered Node ha-373000-m04 in Controller
	  Normal  RegisteredNode           5m52s                  node-controller  Node ha-373000-m04 event: Registered Node ha-373000-m04 in Controller
	  Normal  RegisteredNode           5m50s                  node-controller  Node ha-373000-m04 event: Registered Node ha-373000-m04 in Controller
	  Normal  NodeReady                5m31s                  kubelet          Node ha-373000-m04 status is now: NodeReady
	  Normal  RegisteredNode           4m39s                  node-controller  Node ha-373000-m04 event: Registered Node ha-373000-m04 in Controller
	  Normal  RegisteredNode           3m10s                  node-controller  Node ha-373000-m04 event: Registered Node ha-373000-m04 in Controller
	  Normal  RegisteredNode           2m48s                  node-controller  Node ha-373000-m04 event: Registered Node ha-373000-m04 in Controller
	  Normal  NodeNotReady             2m30s                  node-controller  Node ha-373000-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           106s                   node-controller  Node ha-373000-m04 event: Registered Node ha-373000-m04 in Controller
	
	
	==> dmesg <==
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.035772] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.008033] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.653667] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006467] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.700574] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.244282] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.513719] systemd-fstab-generator[471]: Ignoring "noauto" option for root device
	[  +0.099510] systemd-fstab-generator[483]: Ignoring "noauto" option for root device
	[  +1.960582] systemd-fstab-generator[1091]: Ignoring "noauto" option for root device
	[  +0.269134] systemd-fstab-generator[1126]: Ignoring "noauto" option for root device
	[  +0.056589] kauditd_printk_skb: 99 callbacks suppressed
	[  +0.053681] systemd-fstab-generator[1138]: Ignoring "noauto" option for root device
	[  +0.111453] systemd-fstab-generator[1152]: Ignoring "noauto" option for root device
	[  +2.427542] systemd-fstab-generator[1373]: Ignoring "noauto" option for root device
	[  +0.102430] systemd-fstab-generator[1385]: Ignoring "noauto" option for root device
	[  +0.104882] systemd-fstab-generator[1397]: Ignoring "noauto" option for root device
	[  +0.113938] systemd-fstab-generator[1412]: Ignoring "noauto" option for root device
	[  +0.439463] systemd-fstab-generator[1572]: Ignoring "noauto" option for root device
	[  +6.887746] kauditd_printk_skb: 212 callbacks suppressed
	[Aug18 19:05] kauditd_printk_skb: 40 callbacks suppressed
	[Aug18 19:06] kauditd_printk_skb: 85 callbacks suppressed
	
	
	==> etcd [311485d21966] <==
	2024/08/18 19:04:24 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-18T19:04:24.294478Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"5.811599644s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-18T19:04:24.297217Z","caller":"traceutil/trace.go:171","msg":"trace[1222647709] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; }","duration":"5.81433944s","start":"2024-08-18T19:04:18.482872Z","end":"2024-08-18T19:04:24.297212Z","steps":["trace[1222647709] 'agreement among raft nodes before linearized reading'  (duration: 5.811599988s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T19:04:24.297250Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T19:04:18.482838Z","time spent":"5.814390017s","remote":"127.0.0.1:50520","response type":"/etcdserverpb.KV/Range","request count":0,"request size":86,"response count":0,"response size":0,"request content":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true "}
	2024/08/18 19:04:24 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-18T19:04:24.343451Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-18T19:04:24.343477Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-18T19:04:24.343545Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-18T19:04:24.344459Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:04:24.344477Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:04:24.344491Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:04:24.344572Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:04:24.344598Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:04:24.344719Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:04:24.344731Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:04:24.344736Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"3dc5de516363476c"}
	{"level":"info","ts":"2024-08-18T19:04:24.344743Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3dc5de516363476c"}
	{"level":"info","ts":"2024-08-18T19:04:24.344755Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3dc5de516363476c"}
	{"level":"info","ts":"2024-08-18T19:04:24.345718Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3dc5de516363476c"}
	{"level":"info","ts":"2024-08-18T19:04:24.345771Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3dc5de516363476c"}
	{"level":"info","ts":"2024-08-18T19:04:24.345797Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3dc5de516363476c"}
	{"level":"info","ts":"2024-08-18T19:04:24.345806Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"3dc5de516363476c"}
	{"level":"info","ts":"2024-08-18T19:04:24.349335Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-18T19:04:24.349441Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-18T19:04:24.349451Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-373000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> etcd [a9e532272f1b] <==
	{"level":"warn","ts":"2024-08-18T19:06:37.717046Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3dc5de516363476c","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-18T19:06:40.938257Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"3dc5de516363476c"}
	{"level":"info","ts":"2024-08-18T19:06:40.947603Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3dc5de516363476c"}
	{"level":"info","ts":"2024-08-18T19:06:40.949815Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3dc5de516363476c"}
	{"level":"info","ts":"2024-08-18T19:06:40.996821Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"3dc5de516363476c","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-18T19:06:40.996896Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3dc5de516363476c"}
	{"level":"info","ts":"2024-08-18T19:06:41.079829Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"3dc5de516363476c","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-18T19:06:41.079911Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3dc5de516363476c"}
	{"level":"info","ts":"2024-08-18T19:08:27.609427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(7229224765318730190 13314548521573537860)"}
	{"level":"info","ts":"2024-08-18T19:08:27.610199Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","removed-remote-peer-id":"3dc5de516363476c","removed-remote-peer-urls":["https://192.169.0.7:2380"]}
	{"level":"info","ts":"2024-08-18T19:08:27.610266Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"3dc5de516363476c"}
	{"level":"warn","ts":"2024-08-18T19:08:27.610807Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3dc5de516363476c"}
	{"level":"info","ts":"2024-08-18T19:08:27.610849Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3dc5de516363476c"}
	{"level":"warn","ts":"2024-08-18T19:08:27.611427Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3dc5de516363476c"}
	{"level":"info","ts":"2024-08-18T19:08:27.611465Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3dc5de516363476c"}
	{"level":"info","ts":"2024-08-18T19:08:27.611490Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3dc5de516363476c"}
	{"level":"warn","ts":"2024-08-18T19:08:27.612005Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3dc5de516363476c","error":"context canceled"}
	{"level":"warn","ts":"2024-08-18T19:08:27.612203Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"3dc5de516363476c","error":"failed to read 3dc5de516363476c on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-18T19:08:27.612244Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3dc5de516363476c"}
	{"level":"warn","ts":"2024-08-18T19:08:27.612681Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3dc5de516363476c","error":"context canceled"}
	{"level":"info","ts":"2024-08-18T19:08:27.613023Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3dc5de516363476c"}
	{"level":"info","ts":"2024-08-18T19:08:27.613057Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"3dc5de516363476c"}
	{"level":"info","ts":"2024-08-18T19:08:27.613068Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"b8c6c7563d17d844","removed-remote-peer-id":"3dc5de516363476c"}
	{"level":"warn","ts":"2024-08-18T19:08:27.618633Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"b8c6c7563d17d844","remote-peer-id-stream-handler":"b8c6c7563d17d844","remote-peer-id-from":"3dc5de516363476c"}
	{"level":"warn","ts":"2024-08-18T19:08:27.620988Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"b8c6c7563d17d844","remote-peer-id-stream-handler":"b8c6c7563d17d844","remote-peer-id-from":"3dc5de516363476c"}
	
	
	==> kernel <==
	 19:08:33 up 4 min,  0 users,  load average: 0.19, 0.26, 0.11
	Linux ha-373000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [0d55a0eeb67f] <==
	I0818 19:03:45.806841       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	I0818 19:03:55.803194       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0818 19:03:55.803360       1 main.go:322] Node ha-373000-m03 has CIDR [10.244.2.0/24] 
	I0818 19:03:55.803667       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0818 19:03:55.803856       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	I0818 19:03:55.804295       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0818 19:03:55.804448       1 main.go:299] handling current node
	I0818 19:03:55.804713       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0818 19:03:55.804905       1 main.go:322] Node ha-373000-m02 has CIDR [10.244.1.0/24] 
	I0818 19:04:05.803401       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0818 19:04:05.803664       1 main.go:299] handling current node
	I0818 19:04:05.803807       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0818 19:04:05.804018       1 main.go:322] Node ha-373000-m02 has CIDR [10.244.1.0/24] 
	I0818 19:04:05.804411       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0818 19:04:05.804569       1 main.go:322] Node ha-373000-m03 has CIDR [10.244.2.0/24] 
	I0818 19:04:05.804783       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0818 19:04:05.804917       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	I0818 19:04:15.811869       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0818 19:04:15.811960       1 main.go:299] handling current node
	I0818 19:04:15.811993       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0818 19:04:15.811999       1 main.go:322] Node ha-373000-m02 has CIDR [10.244.1.0/24] 
	I0818 19:04:15.812247       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0818 19:04:15.812278       1 main.go:322] Node ha-373000-m03 has CIDR [10.244.2.0/24] 
	I0818 19:04:15.812456       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0818 19:04:15.812775       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [fbeef7aab770] <==
	I0818 19:08:03.321711       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	I0818 19:08:03.321913       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0818 19:08:03.321961       1 main.go:299] handling current node
	I0818 19:08:13.321058       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0818 19:08:13.321081       1 main.go:322] Node ha-373000-m03 has CIDR [10.244.2.0/24] 
	I0818 19:08:13.321201       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0818 19:08:13.321287       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	I0818 19:08:13.321527       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0818 19:08:13.321536       1 main.go:299] handling current node
	I0818 19:08:13.321545       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0818 19:08:13.321548       1 main.go:322] Node ha-373000-m02 has CIDR [10.244.1.0/24] 
	I0818 19:08:23.318236       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0818 19:08:23.318272       1 main.go:322] Node ha-373000-m02 has CIDR [10.244.1.0/24] 
	I0818 19:08:23.318358       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0818 19:08:23.318384       1 main.go:322] Node ha-373000-m03 has CIDR [10.244.2.0/24] 
	I0818 19:08:23.318431       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0818 19:08:23.318455       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	I0818 19:08:23.318492       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0818 19:08:23.318516       1 main.go:299] handling current node
	I0818 19:08:33.318121       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0818 19:08:33.318160       1 main.go:299] handling current node
	I0818 19:08:33.318171       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0818 19:08:33.318175       1 main.go:322] Node ha-373000-m02 has CIDR [10.244.1.0/24] 
	I0818 19:08:33.318256       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0818 19:08:33.318261       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8d1b9f96928b] <==
	I0818 19:05:17.645616       1 controller.go:90] Starting OpenAPI V3 controller
	I0818 19:05:17.645726       1 naming_controller.go:294] Starting NamingConditionController
	I0818 19:05:17.646051       1 establishing_controller.go:81] Starting EstablishingController
	I0818 19:05:17.646230       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0818 19:05:17.646334       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0818 19:05:17.646408       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0818 19:05:17.726468       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0818 19:05:17.726550       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0818 19:05:17.726964       1 shared_informer.go:320] Caches are synced for configmaps
	I0818 19:05:17.727055       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0818 19:05:17.734198       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0818 19:05:17.740843       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0818 19:05:17.741048       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0818 19:05:17.741237       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0818 19:05:17.741571       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0818 19:05:17.741772       1 aggregator.go:171] initial CRD sync complete...
	I0818 19:05:17.741794       1 autoregister_controller.go:144] Starting autoregister controller
	I0818 19:05:17.741804       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0818 19:05:17.741815       1 cache.go:39] Caches are synced for autoregister controller
	I0818 19:05:17.751519       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0818 19:05:17.755144       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0818 19:05:17.765153       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0818 19:05:17.765474       1 policy_source.go:224] refreshing policies
	I0818 19:05:17.804728       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0818 19:05:18.643866       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	
	
	==> kube-apiserver [da35cb184d7d] <==
	W0818 19:04:25.349322       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.349369       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.349419       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.349531       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.349595       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.349641       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.349733       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.349810       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.349834       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.349904       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.349973       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.350044       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.350113       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.350180       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.350212       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.349915       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.349988       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.350066       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.349814       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.350182       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.350401       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.350426       1 logging.go:55] [core] [Channel #13 SubChannel #16]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.350445       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.350464       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:04:25.349746       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [2848cdc0e8c1] <==
	I0818 19:06:21.735277       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="12.863247ms"
	I0818 19:06:21.735967       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="81.466µs"
	I0818 19:06:21.746428       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-ctkgn EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-ctkgn\": the object has been modified; please apply your changes to the latest version and try again"
	I0818 19:06:21.747565       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"313f6603-3b63-4bf8-b340-97d07580eb36", APIVersion:"v1", ResourceVersion:"244", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-ctkgn EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-ctkgn": the object has been modified; please apply your changes to the latest version and try again
	I0818 19:06:39.703806       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-373000-m03"
	I0818 19:06:39.715043       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-373000-m03"
	I0818 19:06:40.697923       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.311µs"
	I0818 19:06:42.536516       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-373000-m03"
	I0818 19:06:43.415265       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="20.489084ms"
	I0818 19:06:43.415383       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="45.297µs"
	I0818 19:06:46.960496       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-373000-m04"
	I0818 19:06:47.058027       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-373000-m04"
	I0818 19:08:24.351739       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-373000-m03"
	I0818 19:08:24.370425       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-373000-m03"
	I0818 19:08:24.433470       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="46.047216ms"
	I0818 19:08:24.462185       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="28.62183ms"
	I0818 19:08:24.475969       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.735467ms"
	I0818 19:08:24.476093       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="44.391µs"
	I0818 19:08:24.476464       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="82.242µs"
	I0818 19:08:24.499947       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.811259ms"
	I0818 19:08:24.500332       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="88.135µs"
	I0818 19:08:26.533800       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="35.004µs"
	I0818 19:08:27.195159       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="35.662µs"
	I0818 19:08:27.197677       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.602µs"
	I0818 19:08:28.364380       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-373000-m03"
	
	
	==> kube-controller-manager [91e90de8fe34] <==
	I0818 19:04:58.710503       1 serving.go:386] Generated self-signed cert in-memory
	I0818 19:04:58.973279       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0818 19:04:58.973364       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:04:58.975212       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0818 19:04:58.975486       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0818 19:04:58.975569       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0818 19:04:58.976177       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0818 19:05:18.981143       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [530d58000189] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 19:05:43.260298       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 19:05:43.283054       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0818 19:05:43.283201       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 19:05:43.332462       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 19:05:43.332509       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 19:05:43.332527       1 server_linux.go:169] "Using iptables Proxier"
	I0818 19:05:43.335382       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 19:05:43.336178       1 server.go:483] "Version info" version="v1.31.0"
	I0818 19:05:43.336209       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:05:43.339664       1 config.go:197] "Starting service config controller"
	I0818 19:05:43.340475       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 19:05:43.340854       1 config.go:104] "Starting endpoint slice config controller"
	I0818 19:05:43.340884       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 19:05:43.342595       1 config.go:326] "Starting node config controller"
	I0818 19:05:43.342621       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 19:05:43.440978       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0818 19:05:43.441099       1 shared_informer.go:320] Caches are synced for service config
	I0818 19:05:43.442676       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [8493354682ea] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 18:59:31.957366       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 18:59:31.964975       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0818 18:59:31.965035       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 18:59:31.993827       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 18:59:31.993880       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 18:59:31.993899       1 server_linux.go:169] "Using iptables Proxier"
	I0818 18:59:31.995999       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 18:59:31.996318       1 server.go:483] "Version info" version="v1.31.0"
	I0818 18:59:31.996347       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 18:59:31.997862       1 config.go:197] "Starting service config controller"
	I0818 18:59:31.997906       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 18:59:31.997955       1 config.go:104] "Starting endpoint slice config controller"
	I0818 18:59:31.997983       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 18:59:31.998642       1 config.go:326] "Starting node config controller"
	I0818 18:59:31.998670       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 18:59:32.098501       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0818 18:59:32.098515       1 shared_informer.go:320] Caches are synced for service config
	I0818 18:59:32.098852       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [807d80bec4e4] <==
	W0818 18:59:24.091285       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0818 18:59:24.091378       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 18:59:24.208309       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0818 18:59:24.208477       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0818 18:59:26.678728       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0818 19:02:38.714766       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-l8txd\": pod kube-proxy-l8txd is already assigned to node \"ha-373000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-l8txd" node="ha-373000-m04"
	E0818 19:02:38.714954       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-l8txd\": pod kube-proxy-l8txd is already assigned to node \"ha-373000-m04\"" pod="kube-system/kube-proxy-l8txd"
	I0818 19:02:38.715132       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-l8txd" node="ha-373000-m04"
	E0818 19:02:38.714987       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-2gf5h\": pod kindnet-2gf5h is already assigned to node \"ha-373000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-2gf5h" node="ha-373000-m04"
	E0818 19:02:38.715342       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ff15d17a-fb96-4721-847f-13f5c0e2613a(kube-system/kindnet-2gf5h) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-2gf5h"
	E0818 19:02:38.715353       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-2gf5h\": pod kindnet-2gf5h is already assigned to node \"ha-373000-m04\"" pod="kube-system/kindnet-2gf5h"
	I0818 19:02:38.715361       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-2gf5h" node="ha-373000-m04"
	E0818 19:02:38.735628       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-l7zlx\": pod kube-proxy-l7zlx is already assigned to node \"ha-373000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-l7zlx" node="ha-373000-m04"
	E0818 19:02:38.735683       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 853afdf8-598a-435c-8c48-233287580493(kube-system/kube-proxy-l7zlx) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-l7zlx"
	E0818 19:02:38.735697       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-l7zlx\": pod kube-proxy-l7zlx is already assigned to node \"ha-373000-m04\"" pod="kube-system/kube-proxy-l7zlx"
	I0818 19:02:38.735708       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-l7zlx" node="ha-373000-m04"
	E0818 19:02:38.736591       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-kg6jv\": pod kindnet-kg6jv is already assigned to node \"ha-373000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-kg6jv" node="ha-373000-m04"
	E0818 19:02:38.736671       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 50964441-c762-4c22-8fd9-c3695b7291c5(kube-system/kindnet-kg6jv) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-kg6jv"
	E0818 19:02:38.736686       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-kg6jv\": pod kindnet-kg6jv is already assigned to node \"ha-373000-m04\"" pod="kube-system/kindnet-kg6jv"
	I0818 19:02:38.736699       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-kg6jv" node="ha-373000-m04"
	E0818 19:02:38.759152       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-6g6fs\": pod kindnet-6g6fs is already assigned to node \"ha-373000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-6g6fs" node="ha-373000-m04"
	E0818 19:02:38.759208       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ccae47d9-4f47-4a8e-9ff1-9c3acf42d3cb(kube-system/kindnet-6g6fs) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-6g6fs"
	E0818 19:02:38.759220       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-6g6fs\": pod kindnet-6g6fs is already assigned to node \"ha-373000-m04\"" pod="kube-system/kindnet-6g6fs"
	I0818 19:02:38.759449       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-6g6fs" node="ha-373000-m04"
	E0818 19:04:24.272540       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [de016fdbd6fe] <==
	I0818 19:04:58.645297       1 serving.go:386] Generated self-signed cert in-memory
	W0818 19:05:08.939365       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0818 19:05:08.939390       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0818 19:05:08.939395       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0818 19:05:17.672661       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0818 19:05:17.674961       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:05:17.680297       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0818 19:05:17.680709       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0818 19:05:17.683175       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0818 19:05:17.689784       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0818 19:05:17.786103       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 18 19:05:42 ha-373000 kubelet[1579]: I0818 19:05:42.379949    1579 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="224201eecaa62b4ed09e6764c91ef4dc" path="/var/lib/kubelet/pods/224201eecaa62b4ed09e6764c91ef4dc/volumes"
	Aug 18 19:05:42 ha-373000 kubelet[1579]: I0818 19:05:42.615140    1579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb4ed9664dda977cc9b021fafae44e8ee00272a594ba9ddcb993b4d0d5f0db6f"
	Aug 18 19:05:42 ha-373000 kubelet[1579]: I0818 19:05:42.759496    1579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc1f2fb60f7c58ea2a794ed7b3890a722b7e02d695c8b7d8be84e17d817f22ff"
	Aug 18 19:05:42 ha-373000 kubelet[1579]: I0818 19:05:42.771290    1579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3772c138aa65e84e76733835788a3b5c8c0f94bde29eaad82c89e1b944ad3bff"
	Aug 18 19:05:42 ha-373000 kubelet[1579]: I0818 19:05:42.799548    1579 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfce6a3dd1783a1665494aa3c9f1676c1fd42788d0dfa87d2196b81b8622522e"
	Aug 18 19:05:50 ha-373000 kubelet[1579]: E0818 19:05:50.390879    1579 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 18 19:05:50 ha-373000 kubelet[1579]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 18 19:05:50 ha-373000 kubelet[1579]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 18 19:05:50 ha-373000 kubelet[1579]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 18 19:05:50 ha-373000 kubelet[1579]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 18 19:05:50 ha-373000 kubelet[1579]: I0818 19:05:50.412291    1579 scope.go:117] "RemoveContainer" containerID="f806e8fda7ac0424ec5809ee1d3490000910e1bcde902d636000fbe7c1a0ad14"
	Aug 18 19:06:13 ha-373000 kubelet[1579]: I0818 19:06:13.151959    1579 scope.go:117] "RemoveContainer" containerID="6ea2d724255aeefc72019808f3a7cf3353706c1aaf09c7f80d3aa13d2a2db8b7"
	Aug 18 19:06:13 ha-373000 kubelet[1579]: I0818 19:06:13.152170    1579 scope.go:117] "RemoveContainer" containerID="b857c2fef140c7ff17e7936e8ecc11703f749579876ae1a0c9996a26e5a9242b"
	Aug 18 19:06:13 ha-373000 kubelet[1579]: E0818 19:06:13.152254    1579 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(aa9c6f5d-6c1e-4901-83bb-62bc420ea044)\"" pod="kube-system/storage-provisioner" podUID="aa9c6f5d-6c1e-4901-83bb-62bc420ea044"
	Aug 18 19:06:27 ha-373000 kubelet[1579]: I0818 19:06:27.368674    1579 scope.go:117] "RemoveContainer" containerID="b857c2fef140c7ff17e7936e8ecc11703f749579876ae1a0c9996a26e5a9242b"
	Aug 18 19:06:50 ha-373000 kubelet[1579]: E0818 19:06:50.388174    1579 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 18 19:06:50 ha-373000 kubelet[1579]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 18 19:06:50 ha-373000 kubelet[1579]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 18 19:06:50 ha-373000 kubelet[1579]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 18 19:06:50 ha-373000 kubelet[1579]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 18 19:07:50 ha-373000 kubelet[1579]: E0818 19:07:50.390057    1579 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 18 19:07:50 ha-373000 kubelet[1579]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 18 19:07:50 ha-373000 kubelet[1579]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 18 19:07:50 ha-373000 kubelet[1579]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 18 19:07:50 ha-373000 kubelet[1579]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-373000 -n ha-373000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-373000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-mdsvq
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-373000 describe pod busybox-7dff88458-mdsvq
helpers_test.go:282: (dbg) kubectl --context ha-373000 describe pod busybox-7dff88458-mdsvq:

                                                
                                                
-- stdout --
	Name:             busybox-7dff88458-mdsvq
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dxj42 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-dxj42:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age               From               Message
	  ----     ------            ----              ----               -------
	  Warning  FailedScheduling  9s (x2 over 11s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  9s (x2 over 11s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  9s (x2 over 11s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

-- /stdout --
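
The events explain why busybox-7dff88458-mdsvq stays Pending after the secondary node was deleted: of the four nodes the scheduler still sees, one carries the untolerated unreachable taint, one is unschedulable, and the two remaining nodes already run a busybox replica, so required pod anti-affinity blocks placement. The test's busybox deployment presumably spreads replicas per node with a rule along these lines (a sketch built from k8s.io/api types; the app=busybox selector and the hostname topology key are assumptions, not read from the manifest):

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		// Assumed shape of the deployment's spread rule: at most one busybox
		// replica per node, keyed on the pod's app=busybox label.
		aff := corev1.Affinity{
			PodAntiAffinity: &corev1.PodAntiAffinity{
				RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
					LabelSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"app": "busybox"},
					},
					TopologyKey: "kubernetes.io/hostname",
				}},
			},
		}
		fmt.Printf("%+v\n", aff)
	}

Because the rule is required rather than preferred, a replica can never co-locate with another busybox pod, which matches the "didn't match pod anti-affinity rules" wording in the events above.
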
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (11.46s)

TestMultiControlPlane/serial/RestartCluster (98.79s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-373000 --wait=true -v=7 --alsologtostderr --driver=hyperkit 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-373000 --wait=true -v=7 --alsologtostderr --driver=hyperkit : exit status 90 (1m36.219612468s)

-- stdout --
	* [ha-373000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "ha-373000" primary control-plane node in "ha-373000" cluster
	* Restarting existing hyperkit VM for "ha-373000" ...
	* Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	* Enabled addons: 
	
	* Starting "ha-373000-m02" control-plane node in "ha-373000" cluster
	* Restarting existing hyperkit VM for "ha-373000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5
	
	

-- /stdout --
** stderr ** 
	I0818 12:09:00.388954    3976 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:09:00.389224    3976 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:09:00.389230    3976 out.go:358] Setting ErrFile to fd 2...
	I0818 12:09:00.389234    3976 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:09:00.389403    3976 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1007/.minikube/bin
	I0818 12:09:00.390788    3976 out.go:352] Setting JSON to false
	I0818 12:09:00.412980    3976 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2311,"bootTime":1724005829,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0818 12:09:00.413073    3976 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:09:00.435491    3976 out.go:177] * [ha-373000] minikube v1.33.1 on Darwin 14.6.1
	I0818 12:09:00.478012    3976 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:09:00.478041    3976 notify.go:220] Checking for updates...
	I0818 12:09:00.520842    3976 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:09:00.541902    3976 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0818 12:09:00.562974    3976 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:09:00.583978    3976 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	I0818 12:09:00.604937    3976 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:09:00.626633    3976 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:09:00.627309    3976 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:09:00.627392    3976 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:09:00.636929    3976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52014
	I0818 12:09:00.637287    3976 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:09:00.637735    3976 main.go:141] libmachine: Using API Version  1
	I0818 12:09:00.637744    3976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:09:00.637948    3976 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:09:00.638063    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:00.638277    3976 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:09:00.638525    3976 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:09:00.638545    3976 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:09:00.646880    3976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52016
	I0818 12:09:00.647224    3976 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:09:00.647595    3976 main.go:141] libmachine: Using API Version  1
	I0818 12:09:00.647613    3976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:09:00.647826    3976 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:09:00.647950    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:00.676977    3976 out.go:177] * Using the hyperkit driver based on existing profile
	I0818 12:09:00.718931    3976 start.go:297] selected driver: hyperkit
	I0818 12:09:00.718961    3976 start.go:901] validating driver "hyperkit" against &{Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:09:00.719183    3976 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:09:00.719386    3976 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:09:00.719595    3976 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19423-1007/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0818 12:09:00.729307    3976 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0818 12:09:00.733175    3976 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:09:00.733199    3976 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0818 12:09:00.735834    3976 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:09:00.735880    3976 cni.go:84] Creating CNI manager for ""
	I0818 12:09:00.735888    3976 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0818 12:09:00.735960    3976 start.go:340] cluster config:
	{Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:09:00.736064    3976 iso.go:125] acquiring lock: {Name:mkfcc70059a4397f76252ba67d02448d3569c468 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:09:00.757023    3976 out.go:177] * Starting "ha-373000" primary control-plane node in "ha-373000" cluster
	I0818 12:09:00.777783    3976 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:09:00.777901    3976 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0818 12:09:00.777924    3976 cache.go:56] Caching tarball of preloaded images
	I0818 12:09:00.778128    3976 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0818 12:09:00.778148    3976 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:09:00.778333    3976 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:09:00.779289    3976 start.go:360] acquireMachinesLock for ha-373000: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:09:00.779483    3976 start.go:364] duration metric: took 143.76µs to acquireMachinesLock for "ha-373000"
	I0818 12:09:00.779521    3976 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:09:00.779537    3976 fix.go:54] fixHost starting: 
	I0818 12:09:00.779956    3976 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:09:00.779984    3976 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:09:00.789309    3976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52018
	I0818 12:09:00.789666    3976 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:09:00.790031    3976 main.go:141] libmachine: Using API Version  1
	I0818 12:09:00.790040    3976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:09:00.790251    3976 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:09:00.790366    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:00.790468    3976 main.go:141] libmachine: (ha-373000) Calling .GetState
	I0818 12:09:00.790556    3976 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:00.790639    3976 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid from json: 3836
	I0818 12:09:00.791548    3976 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid 3836 missing from process table
	I0818 12:09:00.791593    3976 fix.go:112] recreateIfNeeded on ha-373000: state=Stopped err=<nil>
	I0818 12:09:00.791619    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	W0818 12:09:00.791703    3976 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:09:00.833742    3976 out.go:177] * Restarting existing hyperkit VM for "ha-373000" ...
	I0818 12:09:00.854617    3976 main.go:141] libmachine: (ha-373000) Calling .Start
	I0818 12:09:00.854890    3976 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:00.854917    3976 main.go:141] libmachine: (ha-373000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid
	I0818 12:09:00.856657    3976 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid 3836 missing from process table
	I0818 12:09:00.856693    3976 main.go:141] libmachine: (ha-373000) DBG | pid 3836 is in state "Stopped"
	I0818 12:09:00.856718    3976 main.go:141] libmachine: (ha-373000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid...
	I0818 12:09:00.856984    3976 main.go:141] libmachine: (ha-373000) DBG | Using UUID 2f6e5f86-d003-4f9b-8f55-d5f48a14c3df
	I0818 12:09:00.989123    3976 main.go:141] libmachine: (ha-373000) DBG | Generated MAC be:21:66:25:9a:b1
	I0818 12:09:00.989174    3976 main.go:141] libmachine: (ha-373000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000
	I0818 12:09:00.989237    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2f6e5f86-d003-4f9b-8f55-d5f48a14c3df", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c2540)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:09:00.989280    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2f6e5f86-d003-4f9b-8f55-d5f48a14c3df", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c2540)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:09:00.989323    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2f6e5f86-d003-4f9b-8f55-d5f48a14c3df", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/ha-373000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"}
	I0818 12:09:00.989366    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2f6e5f86-d003-4f9b-8f55-d5f48a14c3df -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/ha-373000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"
	I0818 12:09:00.989381    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 12:09:00.990799    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 DEBUG: hyperkit: Pid is 3990
	I0818 12:09:00.991176    3976 main.go:141] libmachine: (ha-373000) DBG | Attempt 0
	I0818 12:09:00.991196    3976 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:00.991218    3976 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid from json: 3990
	I0818 12:09:00.993000    3976 main.go:141] libmachine: (ha-373000) DBG | Searching for be:21:66:25:9a:b1 in /var/db/dhcpd_leases ...
	I0818 12:09:00.993068    3976 main.go:141] libmachine: (ha-373000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0818 12:09:00.993082    3976 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:09:00.993090    3976 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:09:00.993097    3976 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c3975b}
	I0818 12:09:00.993119    3976 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39749}
	I0818 12:09:00.993129    3976 main.go:141] libmachine: (ha-373000) DBG | Found match: be:21:66:25:9a:b1
	I0818 12:09:00.993139    3976 main.go:141] libmachine: (ha-373000) DBG | IP: 192.169.0.5
	I0818 12:09:00.993184    3976 main.go:141] libmachine: (ha-373000) Calling .GetConfigRaw
	I0818 12:09:00.994094    3976 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:09:00.994333    3976 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:09:00.994945    3976 machine.go:93] provisionDockerMachine start ...
	I0818 12:09:00.994967    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:00.995142    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:00.995271    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:00.995391    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:00.995521    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:00.995632    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:00.995768    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:00.996051    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:00.996062    3976 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 12:09:00.999904    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 12:09:01.080830    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 12:09:01.081571    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:09:01.081587    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:09:01.081595    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:09:01.081604    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:09:01.460230    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 12:09:01.460268    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 12:09:01.574713    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:09:01.574755    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:09:01.574768    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:09:01.574787    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:09:01.575699    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 12:09:01.575710    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 12:09:07.163001    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:07 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0818 12:09:07.163029    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:07 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0818 12:09:07.163053    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:07 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0818 12:09:07.186829    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:07 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0818 12:09:12.062770    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 12:09:12.062784    3976 main.go:141] libmachine: (ha-373000) Calling .GetMachineName
	I0818 12:09:12.062975    3976 buildroot.go:166] provisioning hostname "ha-373000"
	I0818 12:09:12.062986    3976 main.go:141] libmachine: (ha-373000) Calling .GetMachineName
	I0818 12:09:12.063087    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.063175    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:12.063280    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.063371    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.063480    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:12.063605    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:12.063750    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:12.063759    3976 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-373000 && echo "ha-373000" | sudo tee /etc/hostname
	I0818 12:09:12.131801    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-373000
	
	I0818 12:09:12.131819    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.131954    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:12.132061    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.132144    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.132224    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:12.132376    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:12.132528    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:12.132546    3976 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-373000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-373000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-373000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 12:09:12.199331    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 12:09:12.199349    3976 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1007/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1007/.minikube}
	I0818 12:09:12.199369    3976 buildroot.go:174] setting up certificates
	I0818 12:09:12.199383    3976 provision.go:84] configureAuth start
	I0818 12:09:12.199391    3976 main.go:141] libmachine: (ha-373000) Calling .GetMachineName
	I0818 12:09:12.199540    3976 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:09:12.199634    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.199719    3976 provision.go:143] copyHostCerts
	I0818 12:09:12.199749    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:09:12.199819    3976 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem, removing ...
	I0818 12:09:12.199828    3976 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:09:12.199960    3976 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem (1082 bytes)
	I0818 12:09:12.200176    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:09:12.200222    3976 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem, removing ...
	I0818 12:09:12.200227    3976 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:09:12.200306    3976 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem (1123 bytes)
	I0818 12:09:12.200461    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:09:12.200505    3976 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem, removing ...
	I0818 12:09:12.200509    3976 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:09:12.200584    3976 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem (1675 bytes)
	I0818 12:09:12.200731    3976 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem org=jenkins.ha-373000 san=[127.0.0.1 192.169.0.5 ha-373000 localhost minikube]
	I0818 12:09:12.289022    3976 provision.go:177] copyRemoteCerts
	I0818 12:09:12.289076    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 12:09:12.289091    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.289227    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:12.289322    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.289416    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:12.289508    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:09:12.325856    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 12:09:12.325929    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 12:09:12.345953    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 12:09:12.346012    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0818 12:09:12.366027    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 12:09:12.366092    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 12:09:12.386212    3976 provision.go:87] duration metric: took 186.823558ms to configureAuth
	I0818 12:09:12.386225    3976 buildroot.go:189] setting minikube options for container-runtime
	I0818 12:09:12.386405    3976 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:09:12.386418    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:12.386551    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.386643    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:12.386731    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.386817    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.386909    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:12.387025    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:12.387159    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:12.387167    3976 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0818 12:09:12.445833    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0818 12:09:12.445851    3976 buildroot.go:70] root file system type: tmpfs
	I0818 12:09:12.445930    3976 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0818 12:09:12.445943    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.446067    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:12.446173    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.446279    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.446389    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:12.446543    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:12.446679    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:12.446725    3976 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0818 12:09:12.516077    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0818 12:09:12.516100    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.516233    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:12.516348    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.516437    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.516526    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:12.516667    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:12.516813    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:12.516825    3976 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0818 12:09:14.219167    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
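
The diff-or-replace command above is what keeps the unit update idempotent: the rendered docker.service.new is only moved into place, and systemd only reloaded, re-enabled and restarted, when it differs from the unit already on disk; here diff fails outright because no unit existed yet, so the replace branch runs and the enable symlink is created. The same write-if-changed pattern in Go (a hypothetical helper for illustration, not minikube's implementation):

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// writeIfChanged replaces path only when the content differs, and reports
	// whether the caller needs to reload/restart the service.
	func writeIfChanged(path string, content []byte) (bool, error) {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, content) {
			return false, nil // identical: nothing to do, no restart needed
		}
		if err := os.WriteFile(path+".new", content, 0o644); err != nil {
			return false, err
		}
		return true, os.Rename(path+".new", path)
	}

	func main() {
		changed, err := writeIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
		fmt.Println(changed, err)
	}
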
	I0818 12:09:14.219181    3976 machine.go:96] duration metric: took 13.22463913s to provisionDockerMachine
	I0818 12:09:14.219193    3976 start.go:293] postStartSetup for "ha-373000" (driver="hyperkit")
	I0818 12:09:14.219201    3976 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 12:09:14.219211    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:14.219390    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 12:09:14.219417    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:14.219519    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:14.219630    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:14.219724    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:14.219808    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:09:14.259561    3976 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 12:09:14.263959    3976 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 12:09:14.263976    3976 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/addons for local assets ...
	I0818 12:09:14.264080    3976 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/files for local assets ...
	I0818 12:09:14.264273    3976 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> 15262.pem in /etc/ssl/certs
	I0818 12:09:14.264280    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /etc/ssl/certs/15262.pem
	I0818 12:09:14.264487    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 12:09:14.272283    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:09:14.302942    3976 start.go:296] duration metric: took 83.742133ms for postStartSetup
	I0818 12:09:14.302965    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:14.303146    3976 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0818 12:09:14.303160    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:14.303248    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:14.303361    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:14.303436    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:14.303526    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:09:14.338080    3976 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0818 12:09:14.338142    3976 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0818 12:09:14.391638    3976 fix.go:56] duration metric: took 13.612527396s for fixHost
	I0818 12:09:14.391662    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:14.391810    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:14.391899    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:14.391991    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:14.392074    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:14.392222    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:14.392364    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:14.392372    3976 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 12:09:14.449746    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724008154.620949792
	
	I0818 12:09:14.449760    3976 fix.go:216] guest clock: 1724008154.620949792
	I0818 12:09:14.449772    3976 fix.go:229] Guest: 2024-08-18 12:09:14.620949792 -0700 PDT Remote: 2024-08-18 12:09:14.391652 -0700 PDT m=+14.038170292 (delta=229.297792ms)
	I0818 12:09:14.449789    3976 fix.go:200] guest clock delta is within tolerance: 229.297792ms
	I0818 12:09:14.449793    3976 start.go:83] releasing machines lock for "ha-373000", held for 13.670724274s
	I0818 12:09:14.449812    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:14.449942    3976 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:09:14.450037    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:14.450349    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:14.450474    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:14.450548    3976 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 12:09:14.450580    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:14.450637    3976 ssh_runner.go:195] Run: cat /version.json
	I0818 12:09:14.450648    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:14.450688    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:14.450746    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:14.450782    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:14.450836    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:14.450854    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:14.450935    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:14.450952    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:09:14.451045    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:09:14.541757    3976 ssh_runner.go:195] Run: systemctl --version
	I0818 12:09:14.546793    3976 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 12:09:14.550801    3976 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 12:09:14.550839    3976 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 12:09:14.564129    3976 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 12:09:14.564141    3976 start.go:495] detecting cgroup driver to use...
	I0818 12:09:14.564243    3976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:09:14.581664    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0818 12:09:14.590425    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0818 12:09:14.599077    3976 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0818 12:09:14.599120    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0818 12:09:14.607868    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:09:14.616526    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0818 12:09:14.625074    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:09:14.633725    3976 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 12:09:14.642461    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0818 12:09:14.651030    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0818 12:09:14.659717    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0818 12:09:14.668509    3976 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 12:09:14.676419    3976 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 12:09:14.684357    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:14.777696    3976 ssh_runner.go:195] Run: sudo systemctl restart containerd
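
The run of sed commands above rewrites /etc/containerd/config.toml so containerd agrees with the cgroupfs cgroup driver minikube selected: SystemdCgroup is forced to false, the runc runtime is normalized to io.containerd.runc.v2, the sandbox image and CNI conf dir are pinned, and containerd is restarted so the edits take effect. The key substitution, reproduced with Go's regexp package on an in-memory snippet (illustrative, not the real file edit):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// In-memory stand-in for the relevant part of /etc/containerd/config.toml.
		conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = true`

		// Same effect as: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
	}
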
	I0818 12:09:14.795379    3976 start.go:495] detecting cgroup driver to use...
	I0818 12:09:14.795465    3976 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0818 12:09:14.808091    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:09:14.819351    3976 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 12:09:14.834858    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:09:14.845068    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:09:14.855088    3976 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0818 12:09:14.879151    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:09:14.889782    3976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:09:14.904555    3976 ssh_runner.go:195] Run: which cri-dockerd
	I0818 12:09:14.907616    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0818 12:09:14.914893    3976 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0818 12:09:14.928498    3976 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0818 12:09:15.021302    3976 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0818 12:09:15.126534    3976 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0818 12:09:15.126611    3976 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0818 12:09:15.141437    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:15.238491    3976 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 12:09:17.633635    3976 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.395193434s)
	I0818 12:09:17.633701    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0818 12:09:17.644119    3976 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0818 12:09:17.657413    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:09:17.668074    3976 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0818 12:09:17.762478    3976 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0818 12:09:17.858367    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:17.948600    3976 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0818 12:09:17.962148    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:09:17.972120    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:18.070649    3976 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0818 12:09:18.132791    3976 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0818 12:09:18.132869    3976 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0818 12:09:18.137140    3976 start.go:563] Will wait 60s for crictl version
	I0818 12:09:18.137200    3976 ssh_runner.go:195] Run: which crictl
	I0818 12:09:18.140608    3976 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 12:09:18.167352    3976 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0818 12:09:18.167422    3976 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:09:18.186476    3976 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:09:18.224169    3976 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0818 12:09:18.224214    3976 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:09:18.224595    3976 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0818 12:09:18.229086    3976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
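
The /etc/hosts rewrite above uses a filter-and-append idiom instead of sed -i: strip any existing entry, append the fresh one, then copy the temp file back, so the update is idempotent. The same pattern spelled out, with a hypothetical entry:

	# illustrative sketch of the hosts-update idiom; example.internal is made up
	{ grep -v $'\texample.internal$' /etc/hosts; \
	  echo $'10.0.0.1\texample.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts    # cp (not mv) keeps /etc/hosts's inode and permissions
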
	I0818 12:09:18.238631    3976 kubeadm.go:883] updating cluster {Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 12:09:18.238717    3976 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:09:18.238780    3976 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0818 12:09:18.252546    3976 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0818 12:09:18.252557    3976 docker.go:615] Images already preloaded, skipping extraction
	I0818 12:09:18.252627    3976 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0818 12:09:18.266684    3976 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0818 12:09:18.266703    3976 cache_images.go:84] Images are preloaded, skipping loading
	I0818 12:09:18.266713    3976 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0818 12:09:18.266790    3976 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-373000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
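
The [Service] stanza above lands in a systemd drop-in (the 10-kubeadm.conf scp'd a few lines below); the empty ExecStart= line clears the packaged command before the override sets the real one. The merged unit can be inspected with the same systemctl cat used for docker.service earlier:

	$ sudo systemctl cat kubelet
	# prints /lib/systemd/system/kubelet.service followed by
	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
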
	I0818 12:09:18.266861    3976 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0818 12:09:18.304192    3976 cni.go:84] Creating CNI manager for ""
	I0818 12:09:18.304204    3976 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0818 12:09:18.304213    3976 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 12:09:18.304229    3976 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-373000 NodeName:ha-373000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 12:09:18.304320    3976 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-373000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
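
The generated file stacks four YAML documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. Assuming a recent kubeadm (the validate subcommand shipped around v1.26), a file like the kubeadm.yaml.new written below can be sanity-checked before use:

	$ kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
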
	I0818 12:09:18.304334    3976 kube-vip.go:115] generating kube-vip config ...
	I0818 12:09:18.304382    3976 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0818 12:09:18.316732    3976 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0818 12:09:18.316793    3976 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
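
With cp_enable and lb_enable both "true", kube-vip should place the 192.169.0.254 VIP as a /32 on eth0 of whichever control-plane node currently holds the plndr-cp-lock lease. A sketch of how to verify from a node (commands assumed, not from this log):

	$ ip addr show eth0 | grep 192.169.0.254         # present only on the current leader
	$ kubectl -n kube-system get lease plndr-cp-lock # HOLDER shows the leader node
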
	I0818 12:09:18.316840    3976 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 12:09:18.324597    3976 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 12:09:18.324641    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0818 12:09:18.331779    3976 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0818 12:09:18.345158    3976 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 12:09:18.358298    3976 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0818 12:09:18.372286    3976 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0818 12:09:18.385485    3976 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0818 12:09:18.388341    3976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 12:09:18.397526    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:18.496612    3976 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 12:09:18.511160    3976 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000 for IP: 192.169.0.5
	I0818 12:09:18.511172    3976 certs.go:194] generating shared ca certs ...
	I0818 12:09:18.511184    3976 certs.go:226] acquiring lock for ca certs: {Name:mkf9c4ce6e92bb713042213e4605fc93500c1ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:09:18.511356    3976 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key
	I0818 12:09:18.511436    3976 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key
	I0818 12:09:18.511446    3976 certs.go:256] generating profile certs ...
	I0818 12:09:18.511538    3976 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.key
	I0818 12:09:18.511564    3976 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.55d44b69
	I0818 12:09:18.511579    3976 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.55d44b69 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0818 12:09:18.678090    3976 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.55d44b69 ...
	I0818 12:09:18.678108    3976 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.55d44b69: {Name:mk412ce60d50ec37c24febde03f7225e8a48a24c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:09:18.678466    3976 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.55d44b69 ...
	I0818 12:09:18.678480    3976 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.55d44b69: {Name:mke31239238122280f7cbf00316b2acd43533e9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:09:18.678743    3976 certs.go:381] copying /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.55d44b69 -> /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt
	I0818 12:09:18.678987    3976 certs.go:385] copying /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.55d44b69 -> /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key
	I0818 12:09:18.679293    3976 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key
	I0818 12:09:18.679306    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0818 12:09:18.679332    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0818 12:09:18.679353    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0818 12:09:18.679374    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0818 12:09:18.679394    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0818 12:09:18.679414    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0818 12:09:18.679441    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0818 12:09:18.679462    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0818 12:09:18.679567    3976 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem (1338 bytes)
	W0818 12:09:18.679618    3976 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526_empty.pem, impossibly tiny 0 bytes
	I0818 12:09:18.679629    3976 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem (1675 bytes)
	I0818 12:09:18.679662    3976 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem (1082 bytes)
	I0818 12:09:18.679695    3976 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem (1123 bytes)
	I0818 12:09:18.679735    3976 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem (1675 bytes)
	I0818 12:09:18.679815    3976 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:09:18.679851    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /usr/share/ca-certificates/15262.pem
	I0818 12:09:18.679895    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:09:18.679917    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem -> /usr/share/ca-certificates/1526.pem
	I0818 12:09:18.680416    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 12:09:18.731491    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0818 12:09:18.777149    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 12:09:18.836957    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0818 12:09:18.879727    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0818 12:09:18.904838    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 12:09:18.933787    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 12:09:18.969389    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 12:09:18.994753    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /usr/share/ca-certificates/15262.pem (1708 bytes)
	I0818 12:09:19.013849    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 12:09:19.033471    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem --> /usr/share/ca-certificates/1526.pem (1338 bytes)
	I0818 12:09:19.052595    3976 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 12:09:19.066128    3976 ssh_runner.go:195] Run: openssl version
	I0818 12:09:19.070271    3976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15262.pem && ln -fs /usr/share/ca-certificates/15262.pem /etc/ssl/certs/15262.pem"
	I0818 12:09:19.079228    3976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15262.pem
	I0818 12:09:19.082728    3976 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:54 /usr/share/ca-certificates/15262.pem
	I0818 12:09:19.082763    3976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15262.pem
	I0818 12:09:19.086877    3976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15262.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 12:09:19.095804    3976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 12:09:19.104889    3976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:09:19.108208    3976 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:38 /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:09:19.108241    3976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:09:19.112406    3976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 12:09:19.121720    3976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1526.pem && ln -fs /usr/share/ca-certificates/1526.pem /etc/ssl/certs/1526.pem"
	I0818 12:09:19.130845    3976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1526.pem
	I0818 12:09:19.134345    3976 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:54 /usr/share/ca-certificates/1526.pem
	I0818 12:09:19.134389    3976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1526.pem
	I0818 12:09:19.138941    3976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1526.pem /etc/ssl/certs/51391683.0"
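
The 8-hex-digit symlink names above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash convention: CA lookups in /etc/ssl/certs open <hash>.0, so each CA needs a symlink named after its hash. The hash is reproducible with the same command the log runs:

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941    # hence the /etc/ssl/certs/b5213941.0 symlink
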
	I0818 12:09:19.148376    3976 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 12:09:19.151715    3976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 12:09:19.155985    3976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 12:09:19.160273    3976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 12:09:19.165064    3976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 12:09:19.169962    3976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 12:09:19.174244    3976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
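
Each -checkend 86400 call exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how minikube decides whether an existing cert can be reused on restart. The same check standalone (path hypothetical):

	$ openssl x509 -noout -in server.crt -checkend 86400 \
	    && echo "valid for at least 24h" || echo "expiring soon; regenerate"
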
	I0818 12:09:19.178473    3976 kubeadm.go:392] StartCluster: {Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:09:19.178593    3976 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0818 12:09:19.190838    3976 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 12:09:19.199172    3976 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 12:09:19.199186    3976 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 12:09:19.199227    3976 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 12:09:19.207402    3976 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 12:09:19.207710    3976 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-373000" does not appear in /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:09:19.207791    3976 kubeconfig.go:62] /Users/jenkins/minikube-integration/19423-1007/kubeconfig needs updating (will repair): [kubeconfig missing "ha-373000" cluster setting kubeconfig missing "ha-373000" context setting]
	I0818 12:09:19.207967    3976 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/kubeconfig: {Name:mk884388b2510ace5b23e8fdd3343cda9524a1f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:09:19.208584    3976 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:09:19.208770    3976 kapi.go:59] client config for ha-373000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x52acf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0818 12:09:19.209064    3976 cert_rotation.go:140] Starting client certificate rotation controller
	I0818 12:09:19.209255    3976 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 12:09:19.217108    3976 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0818 12:09:19.217125    3976 kubeadm.go:597] duration metric: took 17.934031ms to restartPrimaryControlPlane
	I0818 12:09:19.217132    3976 kubeadm.go:394] duration metric: took 38.665023ms to StartCluster
	I0818 12:09:19.217145    3976 settings.go:142] acquiring lock: {Name:mk54874f9a6ab5b428c8697c9ef71cbcdde2d89e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:09:19.217216    3976 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:09:19.217617    3976 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/kubeconfig: {Name:mk884388b2510ace5b23e8fdd3343cda9524a1f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:09:19.217869    3976 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:09:19.217886    3976 start.go:241] waiting for startup goroutines ...
	I0818 12:09:19.217906    3976 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 12:09:19.217983    3976 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:09:19.263234    3976 out.go:177] * Enabled addons: 
	I0818 12:09:19.284302    3976 addons.go:510] duration metric: took 66.388858ms for enable addons: enabled=[]
	I0818 12:09:19.284387    3976 start.go:246] waiting for cluster config update ...
	I0818 12:09:19.284400    3976 start.go:255] writing updated cluster config ...
	I0818 12:09:19.306484    3976 out.go:201] 
	I0818 12:09:19.327608    3976 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:09:19.327742    3976 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:09:19.350369    3976 out.go:177] * Starting "ha-373000-m02" control-plane node in "ha-373000" cluster
	I0818 12:09:19.392104    3976 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:09:19.392164    3976 cache.go:56] Caching tarball of preloaded images
	I0818 12:09:19.392336    3976 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0818 12:09:19.392355    3976 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:09:19.392486    3976 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:09:19.393415    3976 start.go:360] acquireMachinesLock for ha-373000-m02: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:09:19.393522    3976 start.go:364] duration metric: took 80.918µs to acquireMachinesLock for "ha-373000-m02"
	I0818 12:09:19.393546    3976 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:09:19.393556    3976 fix.go:54] fixHost starting: m02
	I0818 12:09:19.393965    3976 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:09:19.393990    3976 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:09:19.403655    3976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52040
	I0818 12:09:19.404217    3976 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:09:19.404634    3976 main.go:141] libmachine: Using API Version  1
	I0818 12:09:19.404650    3976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:09:19.405004    3976 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:09:19.405118    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:19.405222    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetState
	I0818 12:09:19.405303    3976 main.go:141] libmachine: (ha-373000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:19.405380    3976 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid from json: 3847
	I0818 12:09:19.406287    3976 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid 3847 missing from process table
	I0818 12:09:19.406302    3976 fix.go:112] recreateIfNeeded on ha-373000-m02: state=Stopped err=<nil>
	I0818 12:09:19.406312    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	W0818 12:09:19.406463    3976 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:09:19.448356    3976 out.go:177] * Restarting existing hyperkit VM for "ha-373000-m02" ...
	I0818 12:09:19.469229    3976 main.go:141] libmachine: (ha-373000-m02) Calling .Start
	I0818 12:09:19.469501    3976 main.go:141] libmachine: (ha-373000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:19.469542    3976 main.go:141] libmachine: (ha-373000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid
	I0818 12:09:19.471314    3976 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid 3847 missing from process table
	I0818 12:09:19.471327    3976 main.go:141] libmachine: (ha-373000-m02) DBG | pid 3847 is in state "Stopped"
	I0818 12:09:19.471351    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid...
	I0818 12:09:19.471584    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Using UUID 7a237572-4e62-4b98-a476-83254bfde967
	I0818 12:09:19.500704    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Generated MAC ca:b5:c4:e6:47:79
	I0818 12:09:19.500730    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000
	I0818 12:09:19.500855    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7a237572-4e62-4b98-a476-83254bfde967", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c4600)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:09:19.500885    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7a237572-4e62-4b98-a476-83254bfde967", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c4600)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:09:19.500929    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "7a237572-4e62-4b98-a476-83254bfde967", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/ha-373000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"}
	I0818 12:09:19.500977    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 7a237572-4e62-4b98-a476-83254bfde967 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/ha-373000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"
	I0818 12:09:19.500998    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 12:09:19.502361    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 DEBUG: hyperkit: Pid is 3997
	I0818 12:09:19.502828    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Attempt 0
	I0818 12:09:19.502885    3976 main.go:141] libmachine: (ha-373000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:19.502920    3976 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid from json: 3997
	I0818 12:09:19.504725    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Searching for ca:b5:c4:e6:47:79 in /var/db/dhcpd_leases ...
	I0818 12:09:19.504780    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0818 12:09:19.504828    3976 main.go:141] libmachine: (ha-373000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:09:19.504848    3976 main.go:141] libmachine: (ha-373000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:09:19.504870    3976 main.go:141] libmachine: (ha-373000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:09:19.504882    3976 main.go:141] libmachine: (ha-373000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c3975b}
	I0818 12:09:19.504895    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Found match: ca:b5:c4:e6:47:79
	I0818 12:09:19.504900    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetConfigRaw
	I0818 12:09:19.504907    3976 main.go:141] libmachine: (ha-373000-m02) DBG | IP: 192.169.0.6
	I0818 12:09:19.505665    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetIP
	I0818 12:09:19.505858    3976 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:09:19.506316    3976 machine.go:93] provisionDockerMachine start ...
	I0818 12:09:19.506328    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:19.506474    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:19.506602    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:19.506707    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:19.506790    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:19.506894    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:19.507039    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:19.507197    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:19.507205    3976 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 12:09:19.510551    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 12:09:19.519215    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 12:09:19.520168    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:09:19.520203    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:09:19.520228    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:09:19.520254    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:09:19.902342    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 12:09:19.902357    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 12:09:20.017440    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:09:20.017463    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:09:20.017471    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:09:20.017477    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:09:20.018303    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 12:09:20.018315    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 12:09:25.632462    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:25 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0818 12:09:25.632549    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:25 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0818 12:09:25.632559    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:25 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0818 12:09:25.657887    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:25 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0818 12:09:28.954523    3976 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.6:22: connect: connection refused
	I0818 12:09:32.012675    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 12:09:32.012690    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetMachineName
	I0818 12:09:32.012844    3976 buildroot.go:166] provisioning hostname "ha-373000-m02"
	I0818 12:09:32.012857    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetMachineName
	I0818 12:09:32.012969    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.013100    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:32.013206    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.013295    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.013399    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:32.013577    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:32.013797    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:32.013807    3976 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-373000-m02 && echo "ha-373000-m02" | sudo tee /etc/hostname
	I0818 12:09:32.083655    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-373000-m02
	
	I0818 12:09:32.083671    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.083802    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:32.083888    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.083968    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.084051    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:32.084177    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:32.084328    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:32.084343    3976 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-373000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-373000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-373000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 12:09:32.145743    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
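
The shell block above follows the Debian 127.0.1.1 convention: the node's own hostname resolves locally without depending on DHCP or DNS, and the sed branch ensures at most one such entry. After it runs, /etc/hosts contains a line like:

	127.0.1.1 ha-373000-m02
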
	I0818 12:09:32.145757    3976 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1007/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1007/.minikube}
	I0818 12:09:32.145771    3976 buildroot.go:174] setting up certificates
	I0818 12:09:32.145778    3976 provision.go:84] configureAuth start
	I0818 12:09:32.145785    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetMachineName
	I0818 12:09:32.145913    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetIP
	I0818 12:09:32.146013    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.146119    3976 provision.go:143] copyHostCerts
	I0818 12:09:32.146155    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:09:32.146207    3976 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem, removing ...
	I0818 12:09:32.146213    3976 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:09:32.146346    3976 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem (1082 bytes)
	I0818 12:09:32.146563    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:09:32.146599    3976 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem, removing ...
	I0818 12:09:32.146604    3976 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:09:32.146673    3976 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem (1123 bytes)
	I0818 12:09:32.146816    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:09:32.146847    3976 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem, removing ...
	I0818 12:09:32.146852    3976 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:09:32.146916    3976 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem (1675 bytes)
	I0818 12:09:32.147063    3976 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem org=jenkins.ha-373000-m02 san=[127.0.0.1 192.169.0.6 ha-373000-m02 localhost minikube]
	I0818 12:09:32.439235    3976 provision.go:177] copyRemoteCerts
	I0818 12:09:32.439288    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 12:09:32.439303    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.439451    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:32.439555    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.439662    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:32.439767    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:09:32.473899    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 12:09:32.473971    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 12:09:32.492902    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 12:09:32.492977    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0818 12:09:32.512205    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 12:09:32.512269    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 12:09:32.531269    3976 provision.go:87] duration metric: took 385.496037ms to configureAuth
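configureAuth ends with the three files dockerd's TLS flags expect: /etc/docker/ca.pem, /etc/docker/server.pem, and /etc/docker/server-key.pem, with the server certificate's SANs covering 127.0.0.1, 192.169.0.6, ha-373000-m02, localhost, and minikube. If the later TLS handshake on port 2376 were in doubt, the SANs could be checked on the node with standard openssl (not part of the logged run):

	# prints the subject and the subjectAltName extension of the cert minikube generated above
	openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName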
	I0818 12:09:32.531282    3976 buildroot.go:189] setting minikube options for container-runtime
	I0818 12:09:32.531440    3976 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:09:32.531454    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:32.531586    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.531687    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:32.531797    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.531905    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.531985    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:32.532087    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:32.532212    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:32.532220    3976 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0818 12:09:32.586134    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0818 12:09:32.586145    3976 buildroot.go:70] root file system type: tmpfs
	I0818 12:09:32.586228    3976 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0818 12:09:32.586239    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.586366    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:32.586454    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.586566    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.586649    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:32.586801    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:32.586940    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:32.586986    3976 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0818 12:09:32.654663    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0818 12:09:32.654688    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.654820    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:32.654904    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.654974    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.655053    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:32.655180    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:32.655330    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:32.655343    3976 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0818 12:09:34.321102    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0818 12:09:34.321115    3976 machine.go:96] duration metric: took 14.8152512s to provisionDockerMachine
	I0818 12:09:34.321123    3976 start.go:293] postStartSetup for "ha-373000-m02" (driver="hyperkit")
	I0818 12:09:34.321131    3976 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 12:09:34.321140    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:34.321324    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 12:09:34.321348    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:34.321440    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:34.321528    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:34.321619    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:34.321715    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:09:34.356724    3976 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 12:09:34.363921    3976 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 12:09:34.363935    3976 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/addons for local assets ...
	I0818 12:09:34.364038    3976 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/files for local assets ...
	I0818 12:09:34.364185    3976 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> 15262.pem in /etc/ssl/certs
	I0818 12:09:34.364192    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /etc/ssl/certs/15262.pem
	I0818 12:09:34.364347    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 12:09:34.379409    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:09:34.407459    3976 start.go:296] duration metric: took 86.328927ms for postStartSetup
	I0818 12:09:34.407481    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:34.407638    3976 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0818 12:09:34.407658    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:34.407738    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:34.407823    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:34.407908    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:34.407985    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:09:34.441305    3976 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0818 12:09:34.441365    3976 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0818 12:09:34.475756    3976 fix.go:56] duration metric: took 15.082665832s for fixHost
	I0818 12:09:34.475780    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:34.475917    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:34.476014    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:34.476109    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:34.476204    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:34.476334    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:34.476475    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:34.476483    3976 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 12:09:34.531245    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724008174.705830135
	
	I0818 12:09:34.531256    3976 fix.go:216] guest clock: 1724008174.705830135
	I0818 12:09:34.531265    3976 fix.go:229] Guest: 2024-08-18 12:09:34.705830135 -0700 PDT Remote: 2024-08-18 12:09:34.475769 -0700 PDT m=+34.122913514 (delta=230.061135ms)
	I0818 12:09:34.531276    3976 fix.go:200] guest clock delta is within tolerance: 230.061135ms
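The clock check is a straight subtraction of the two timestamps fix.go just collected: guest 1724008174.705830135 s minus host 1724008174.475769 s gives 0.230061135 s, the 230.061135ms delta reported above. Because it is under minikube's skew tolerance, no forced time synchronization is needed before releasing the machines lock.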
	I0818 12:09:34.531281    3976 start.go:83] releasing machines lock for "ha-373000-m02", held for 15.138221498s
	I0818 12:09:34.531298    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:34.531428    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetIP
	I0818 12:09:34.555141    3976 out.go:177] * Found network options:
	I0818 12:09:34.576875    3976 out.go:177]   - NO_PROXY=192.169.0.5
	W0818 12:09:34.597784    3976 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 12:09:34.597830    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:34.598688    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:34.598932    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:34.599031    3976 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 12:09:34.599086    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	W0818 12:09:34.599150    3976 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 12:09:34.599257    3976 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0818 12:09:34.599278    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:34.599308    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:34.599482    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:34.599521    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:34.599684    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:34.599720    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:34.599871    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:34.599921    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:09:34.600032    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	W0818 12:09:34.631739    3976 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 12:09:34.631799    3976 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 12:09:34.677593    3976 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 12:09:34.677615    3976 start.go:495] detecting cgroup driver to use...
	I0818 12:09:34.677737    3976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:09:34.693773    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0818 12:09:34.702951    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0818 12:09:34.711799    3976 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0818 12:09:34.711840    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0818 12:09:34.720906    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:09:34.729957    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0818 12:09:34.738902    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:09:34.747932    3976 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 12:09:34.757312    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0818 12:09:34.766375    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0818 12:09:34.775307    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0818 12:09:34.784400    3976 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 12:09:34.792630    3976 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 12:09:34.801021    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:34.911872    3976 ssh_runner.go:195] Run: sudo systemctl restart containerd
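The sed pipeline above rewrites /etc/containerd/config.toml in place before the restart: it pins the sandbox (pause) image to registry.k8s.io/pause:3.10, sets restrict_oom_score_adj = false, forces SystemdCgroup = false so containerd matches the "cgroupfs" driver chosen above, migrates io.containerd.runtime.v1.linux and runc.v1 runtime entries to io.containerd.runc.v2, points conf_dir at /etc/cni/net.d, and re-adds enable_unprivileged_ports = true. A sketch of the resulting cri section (structure assumed from the sed patterns; the file itself is never dumped in the log):

	# assumed layout, reconstructed from the sed expressions above
	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.10"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false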
	I0818 12:09:34.930682    3976 start.go:495] detecting cgroup driver to use...
	I0818 12:09:34.930753    3976 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0818 12:09:34.944697    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:09:34.956782    3976 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 12:09:34.974233    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:09:34.986114    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:09:34.998297    3976 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0818 12:09:35.018378    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:09:35.029759    3976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:09:35.044553    3976 ssh_runner.go:195] Run: which cri-dockerd
	I0818 12:09:35.047654    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0818 12:09:35.055897    3976 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0818 12:09:35.069339    3976 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0818 12:09:35.163048    3976 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0818 12:09:35.263866    3976 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0818 12:09:35.263888    3976 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
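The 130-byte payload scp'd into /etc/docker/daemon.json is not echoed in the log; for the "cgroupfs" driver selected here it plausibly has the following shape (representative content only, assumed rather than captured):

	# assumed shape of /etc/docker/daemon.json; the log records only its size (130 bytes)
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": { "max-size": "100m" },
	  "storage-driver": "overlay2"
	}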
	I0818 12:09:35.281642    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:35.375004    3976 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 12:10:36.400829    3976 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.027707476s)
	I0818 12:10:36.400907    3976 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0818 12:10:36.437434    3976 out.go:201] 
	W0818 12:10:36.459246    3976 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 18 19:09:33 ha-373000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 18 19:09:33 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:33.132734313Z" level=info msg="Starting up"
	Aug 18 19:09:33 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:33.133217341Z" level=info msg="containerd not running, starting managed containerd"
	Aug 18 19:09:33 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:33.133706453Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=503
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.150884592Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165526624Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165600672Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165665661Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165701505Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165883163Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165980711Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166114419Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166158739Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166192923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166222480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166373263Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166624364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168284638Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168338968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168477528Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168522410Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168684236Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168742254Z" level=info msg="metadata content store policy set" policy=shared
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172229271Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172291175Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172328725Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172361584Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172397084Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172469115Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172636000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172713269Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172756026Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172790721Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172822478Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172857013Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172889097Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172923123Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172955052Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172985350Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173017995Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173047134Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173082956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173138952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173171857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173266115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173303729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173337305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173367548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173397195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173426651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173461907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173491945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173521151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173551817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173584158Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173620017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173651734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173681138Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173753818Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173797160Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173851051Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173888629Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173919044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173948712Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173979628Z" level=info msg="NRI interface is disabled by configuration."
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.174202763Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.174288578Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.174373231Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.174419718Z" level=info msg="containerd successfully booted in 0.024281s"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.163281667Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.193454663Z" level=info msg="Loading containers: start."
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.358483324Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.419779026Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.464759087Z" level=info msg="Loading containers: done."
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.475407585Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.475556691Z" level=info msg="Daemon has completed initialization"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.493178383Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.493236047Z" level=info msg="API listen on [::]:2376"
	Aug 18 19:09:34 ha-373000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 18 19:09:35 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:35.562066100Z" level=info msg="Processing signal 'terminated'"
	Aug 18 19:09:35 ha-373000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 18 19:09:35 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:35.563196599Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 18 19:09:35 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:35.563381674Z" level=info msg="Daemon shutdown complete"
	Aug 18 19:09:35 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:35.563404669Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 18 19:09:35 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:35.563423915Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 18 19:09:36 ha-373000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 18 19:09:36 ha-373000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 18 19:09:36 ha-373000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 18 19:09:36 ha-373000-m02 dockerd[1172]: time="2024-08-18T19:09:36.603637435Z" level=info msg="Starting up"
	Aug 18 19:10:36 ha-373000-m02 dockerd[1172]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 18 19:10:36 ha-373000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 18 19:10:36 ha-373000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 18 19:10:36 ha-373000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
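The journal narrows the failure down: the first dockerd (pid 496) found no system containerd, started its own managed containerd on /var/run/docker/containerd/containerd.sock, came up cleanly, and was then stopped so the new daemon.json could take effect. The second dockerd (pid 1172) instead blocks dialing the system socket /run/containerd/containerd.sock and gives up after 60 seconds with "context deadline exceeded", which matches the 1m1s `systemctl restart docker` above. A plausible reading, not proven by this log, is that the earlier `systemctl stop -f containerd` left a stale /run/containerd/containerd.sock behind, so dockerd waited on it instead of falling back to a managed containerd. Commands to confirm that state on the node (standard tooling, not part of the logged run):

	systemctl status containerd --no-pager          # is system containerd actually running?
	ls -l /run/containerd/containerd.sock           # does a (possibly stale) socket exist?
	sudo ctr --address /run/containerd/containerd.sock version   # does anything answer on it?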
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 18 19:09:33 ha-373000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 18 19:09:33 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:33.132734313Z" level=info msg="Starting up"
	Aug 18 19:09:33 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:33.133217341Z" level=info msg="containerd not running, starting managed containerd"
	Aug 18 19:09:33 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:33.133706453Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=503
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.150884592Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165526624Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165600672Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165665661Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165701505Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165883163Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165980711Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166114419Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166158739Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166192923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166222480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166373263Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166624364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168284638Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168338968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168477528Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168522410Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168684236Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168742254Z" level=info msg="metadata content store policy set" policy=shared
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172229271Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172291175Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172328725Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172361584Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172397084Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172469115Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172636000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172713269Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172756026Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172790721Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172822478Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172857013Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172889097Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172923123Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172955052Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172985350Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173017995Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173047134Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173082956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173138952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173171857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173266115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173303729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173337305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173367548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173397195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173426651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173461907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173491945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173521151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173551817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173584158Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173620017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173651734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173681138Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173753818Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173797160Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173851051Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173888629Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173919044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173948712Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173979628Z" level=info msg="NRI interface is disabled by configuration."
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.174202763Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.174288578Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.174373231Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.174419718Z" level=info msg="containerd successfully booted in 0.024281s"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.163281667Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.193454663Z" level=info msg="Loading containers: start."
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.358483324Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.419779026Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.464759087Z" level=info msg="Loading containers: done."
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.475407585Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.475556691Z" level=info msg="Daemon has completed initialization"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.493178383Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.493236047Z" level=info msg="API listen on [::]:2376"
	Aug 18 19:09:34 ha-373000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 18 19:09:35 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:35.562066100Z" level=info msg="Processing signal 'terminated'"
	Aug 18 19:09:35 ha-373000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 18 19:09:35 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:35.563196599Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 18 19:09:35 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:35.563381674Z" level=info msg="Daemon shutdown complete"
	Aug 18 19:09:35 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:35.563404669Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 18 19:09:35 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:35.563423915Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 18 19:09:36 ha-373000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 18 19:09:36 ha-373000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 18 19:09:36 ha-373000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 18 19:09:36 ha-373000-m02 dockerd[1172]: time="2024-08-18T19:09:36.603637435Z" level=info msg="Starting up"
	Aug 18 19:10:36 ha-373000-m02 dockerd[1172]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 18 19:10:36 ha-373000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 18 19:10:36 ha-373000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 18 19:10:36 ha-373000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
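	The decisive failure in the journal excerpt above is dockerd giving up after sixty seconds because nothing ever answered on /run/containerd/containerd.sock. As a rough standalone sketch of what that "context deadline exceeded" means (plain Go, not dockerd or minikube code; the socket path and the one-minute budget are read off the log), a blocking dial loop under a context deadline behaves like this:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// dockerd retries its dial to containerd's socket until a deadline;
	// if containerd never comes up, the context expires and the daemon
	// aborts with the error seen in the log above.
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()

	var d net.Dialer
	for {
		conn, err := d.DialContext(ctx, "unix", "/run/containerd/containerd.sock")
		if err == nil {
			conn.Close()
			fmt.Println("containerd socket is accepting connections")
			return
		}
		select {
		case <-ctx.Done():
			// Prints "context deadline exceeded", matching the daemon log.
			fmt.Println("giving up:", ctx.Err())
			return
		case <-time.After(200 * time.Millisecond):
			// Socket missing or refusing; retry until the deadline fires.
		}
	}
}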
	W0818 12:10:36.459348    3976 out.go:270] * 
	W0818 12:10:36.460605    3976 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:10:36.503171    3976 out.go:201] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-amd64 start -p ha-373000 --wait=true -v=7 --alsologtostderr --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-373000 -n ha-373000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-373000 -n ha-373000: exit status 2 (148.298567ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-373000 logs -n 25: (2.186713606s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-373000 cp ha-373000-m03:/home/docker/cp-test.txt                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04:/home/docker/cp-test_ha-373000-m03_ha-373000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n ha-373000-m04 sudo cat                                                                                      | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /home/docker/cp-test_ha-373000-m03_ha-373000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-373000 cp testdata/cp-test.txt                                                                                            | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-373000 cp ha-373000-m04:/home/docker/cp-test.txt                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1633039751/001/cp-test_ha-373000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-373000 cp ha-373000-m04:/home/docker/cp-test.txt                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000:/home/docker/cp-test_ha-373000-m04_ha-373000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n ha-373000 sudo cat                                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /home/docker/cp-test_ha-373000-m04_ha-373000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-373000 cp ha-373000-m04:/home/docker/cp-test.txt                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m02:/home/docker/cp-test_ha-373000-m04_ha-373000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n ha-373000-m02 sudo cat                                                                                      | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /home/docker/cp-test_ha-373000-m04_ha-373000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-373000 cp ha-373000-m04:/home/docker/cp-test.txt                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m03:/home/docker/cp-test_ha-373000-m04_ha-373000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n ha-373000-m03 sudo cat                                                                                      | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /home/docker/cp-test_ha-373000-m04_ha-373000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-373000 node stop m02 -v=7                                                                                                 | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-373000 node start m02 -v=7                                                                                                | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:04 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-373000 -v=7                                                                                                       | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:04 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-373000 -v=7                                                                                                            | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:04 PDT | 18 Aug 24 12:04 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-373000 --wait=true -v=7                                                                                                | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:04 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-373000                                                                                                            | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:08 PDT |                     |
	| node    | ha-373000 node delete m03 -v=7                                                                                               | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:08 PDT | 18 Aug 24 12:08 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-373000 stop -v=7                                                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:08 PDT | 18 Aug 24 12:09 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-373000 --wait=true                                                                                                     | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:09 PDT |                     |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 12:09:00
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 12:09:00.388954    3976 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:09:00.389224    3976 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:09:00.389230    3976 out.go:358] Setting ErrFile to fd 2...
	I0818 12:09:00.389234    3976 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:09:00.389403    3976 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1007/.minikube/bin
	I0818 12:09:00.390788    3976 out.go:352] Setting JSON to false
	I0818 12:09:00.412980    3976 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2311,"bootTime":1724005829,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0818 12:09:00.413073    3976 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:09:00.435491    3976 out.go:177] * [ha-373000] minikube v1.33.1 on Darwin 14.6.1
	I0818 12:09:00.478012    3976 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:09:00.478041    3976 notify.go:220] Checking for updates...
	I0818 12:09:00.520842    3976 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:09:00.541902    3976 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0818 12:09:00.562974    3976 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:09:00.583978    3976 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	I0818 12:09:00.604937    3976 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:09:00.626633    3976 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:09:00.627309    3976 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:09:00.627392    3976 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:09:00.636929    3976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52014
	I0818 12:09:00.637287    3976 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:09:00.637735    3976 main.go:141] libmachine: Using API Version  1
	I0818 12:09:00.637744    3976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:09:00.637948    3976 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:09:00.638063    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:00.638277    3976 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:09:00.638525    3976 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:09:00.638545    3976 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:09:00.646880    3976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52016
	I0818 12:09:00.647224    3976 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:09:00.647595    3976 main.go:141] libmachine: Using API Version  1
	I0818 12:09:00.647613    3976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:09:00.647826    3976 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:09:00.647950    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:00.676977    3976 out.go:177] * Using the hyperkit driver based on existing profile
	I0818 12:09:00.718931    3976 start.go:297] selected driver: hyperkit
	I0818 12:09:00.718961    3976 start.go:901] validating driver "hyperkit" against &{Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:09:00.719183    3976 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:09:00.719386    3976 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:09:00.719595    3976 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19423-1007/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0818 12:09:00.729307    3976 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0818 12:09:00.733175    3976 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:09:00.733199    3976 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0818 12:09:00.735834    3976 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:09:00.735880    3976 cni.go:84] Creating CNI manager for ""
	I0818 12:09:00.735888    3976 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0818 12:09:00.735960    3976 start.go:340] cluster config:
	{Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:09:00.736064    3976 iso.go:125] acquiring lock: {Name:mkfcc70059a4397f76252ba67d02448d3569c468 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:09:00.757023    3976 out.go:177] * Starting "ha-373000" primary control-plane node in "ha-373000" cluster
	I0818 12:09:00.777783    3976 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:09:00.777901    3976 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0818 12:09:00.777924    3976 cache.go:56] Caching tarball of preloaded images
	I0818 12:09:00.778128    3976 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0818 12:09:00.778148    3976 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:09:00.778333    3976 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:09:00.779289    3976 start.go:360] acquireMachinesLock for ha-373000: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:09:00.779483    3976 start.go:364] duration metric: took 143.76µs to acquireMachinesLock for "ha-373000"
	I0818 12:09:00.779521    3976 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:09:00.779537    3976 fix.go:54] fixHost starting: 
	I0818 12:09:00.779956    3976 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:09:00.779984    3976 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:09:00.789309    3976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52018
	I0818 12:09:00.789666    3976 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:09:00.790031    3976 main.go:141] libmachine: Using API Version  1
	I0818 12:09:00.790040    3976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:09:00.790251    3976 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:09:00.790366    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:00.790468    3976 main.go:141] libmachine: (ha-373000) Calling .GetState
	I0818 12:09:00.790556    3976 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:00.790639    3976 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid from json: 3836
	I0818 12:09:00.791548    3976 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid 3836 missing from process table
	I0818 12:09:00.791593    3976 fix.go:112] recreateIfNeeded on ha-373000: state=Stopped err=<nil>
	I0818 12:09:00.791619    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	W0818 12:09:00.791703    3976 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:09:00.833742    3976 out.go:177] * Restarting existing hyperkit VM for "ha-373000" ...
	I0818 12:09:00.854617    3976 main.go:141] libmachine: (ha-373000) Calling .Start
	I0818 12:09:00.854890    3976 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:00.854917    3976 main.go:141] libmachine: (ha-373000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid
	I0818 12:09:00.856657    3976 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid 3836 missing from process table
	I0818 12:09:00.856693    3976 main.go:141] libmachine: (ha-373000) DBG | pid 3836 is in state "Stopped"
	I0818 12:09:00.856718    3976 main.go:141] libmachine: (ha-373000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid...
	I0818 12:09:00.856984    3976 main.go:141] libmachine: (ha-373000) DBG | Using UUID 2f6e5f86-d003-4f9b-8f55-d5f48a14c3df
	I0818 12:09:00.989123    3976 main.go:141] libmachine: (ha-373000) DBG | Generated MAC be:21:66:25:9a:b1
	I0818 12:09:00.989174    3976 main.go:141] libmachine: (ha-373000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000
	I0818 12:09:00.989237    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2f6e5f86-d003-4f9b-8f55-d5f48a14c3df", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c2540)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:09:00.989280    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2f6e5f86-d003-4f9b-8f55-d5f48a14c3df", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c2540)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:09:00.989323    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2f6e5f86-d003-4f9b-8f55-d5f48a14c3df", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/ha-373000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"}
	I0818 12:09:00.989366    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2f6e5f86-d003-4f9b-8f55-d5f48a14c3df -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/ha-373000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"
	I0818 12:09:00.989381    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 12:09:00.990799    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 DEBUG: hyperkit: Pid is 3990
	I0818 12:09:00.991176    3976 main.go:141] libmachine: (ha-373000) DBG | Attempt 0
	I0818 12:09:00.991196    3976 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:00.991218    3976 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid from json: 3990
	I0818 12:09:00.993000    3976 main.go:141] libmachine: (ha-373000) DBG | Searching for be:21:66:25:9a:b1 in /var/db/dhcpd_leases ...
	I0818 12:09:00.993068    3976 main.go:141] libmachine: (ha-373000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0818 12:09:00.993082    3976 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:09:00.993090    3976 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:09:00.993097    3976 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c3975b}
	I0818 12:09:00.993119    3976 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39749}
	I0818 12:09:00.993129    3976 main.go:141] libmachine: (ha-373000) DBG | Found match: be:21:66:25:9a:b1
	I0818 12:09:00.993139    3976 main.go:141] libmachine: (ha-373000) DBG | IP: 192.169.0.5
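	Per the DBG lines above, the hyperkit driver learns the restarted VM's IP by scanning macOS's /var/db/dhcpd_leases for the MAC address it generated. A minimal standalone sketch of that lookup (plain Go; the field layout follows the entries echoed in the log, and findIPByMAC is a name invented here, not the driver's API):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPByMAC scans a /var/db/dhcpd_leases-style file for an
// hw_address entry matching mac and returns the ip_address of
// the lease block it belongs to.
func findIPByMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// lease entries look like: hw_address=1,be:21:66:25:9a:b1
			if strings.HasSuffix(line, ","+mac) {
				return ip, nil
			}
		}
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	// MAC taken from the "Generated MAC" line in the log above.
	ip, err := findIPByMAC("/var/db/dhcpd_leases", "be:21:66:25:9a:b1")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("lease IP:", ip)
}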
	I0818 12:09:00.993184    3976 main.go:141] libmachine: (ha-373000) Calling .GetConfigRaw
	I0818 12:09:00.994094    3976 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:09:00.994333    3976 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:09:00.994945    3976 machine.go:93] provisionDockerMachine start ...
	I0818 12:09:00.994967    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:00.995142    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:00.995271    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:00.995391    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:00.995521    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:00.995632    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:00.995768    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:00.996051    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:00.996062    3976 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 12:09:00.999904    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 12:09:01.080830    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 12:09:01.081571    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:09:01.081587    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:09:01.081595    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:09:01.081604    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:09:01.460230    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 12:09:01.460268    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 12:09:01.574713    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:09:01.574755    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:09:01.574768    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:09:01.574787    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:09:01.575699    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 12:09:01.575710    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 12:09:07.163001    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:07 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0818 12:09:07.163029    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:07 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0818 12:09:07.163053    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:07 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0818 12:09:07.186829    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:07 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0818 12:09:12.062770    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 12:09:12.062784    3976 main.go:141] libmachine: (ha-373000) Calling .GetMachineName
	I0818 12:09:12.062975    3976 buildroot.go:166] provisioning hostname "ha-373000"
	I0818 12:09:12.062986    3976 main.go:141] libmachine: (ha-373000) Calling .GetMachineName
	I0818 12:09:12.063087    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.063175    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:12.063280    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.063371    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.063480    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:12.063605    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:12.063750    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:12.063759    3976 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-373000 && echo "ha-373000" | sudo tee /etc/hostname
	I0818 12:09:12.131801    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-373000
	
	I0818 12:09:12.131819    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.131954    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:12.132061    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.132144    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.132224    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:12.132376    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:12.132528    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:12.132546    3976 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-373000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-373000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-373000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 12:09:12.199331    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
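	The shell above is the usual idempotent /etc/hosts fix-up: do nothing if a line already ends with the hostname, rewrite an existing 127.0.1.1 entry if there is one, and only otherwise append. The same logic as a standalone Go sketch (ensureHost and the hosts.test path are hypothetical; the real provisioner runs the shell over SSH as root):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHost mirrors the shell: if no line's last field is name,
// rewrite an existing 127.0.1.1 entry or append a new one.
func ensureHost(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		// approximates grep -xq '.*\s<name>' (line ends with the name)
		fields := strings.Fields(l)
		if len(fields) > 0 && fields[len(fields)-1] == name {
			return nil // already present, nothing to do
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+name)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	// hosts.test is a stand-in path; the real target is /etc/hosts.
	if err := ensureHost("hosts.test", "ha-373000"); err != nil {
		fmt.Println(err)
	}
}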
	I0818 12:09:12.199349    3976 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1007/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1007/.minikube}
	I0818 12:09:12.199369    3976 buildroot.go:174] setting up certificates
	I0818 12:09:12.199383    3976 provision.go:84] configureAuth start
	I0818 12:09:12.199391    3976 main.go:141] libmachine: (ha-373000) Calling .GetMachineName
	I0818 12:09:12.199540    3976 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:09:12.199634    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.199719    3976 provision.go:143] copyHostCerts
	I0818 12:09:12.199749    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:09:12.199819    3976 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem, removing ...
	I0818 12:09:12.199828    3976 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:09:12.199960    3976 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem (1082 bytes)
	I0818 12:09:12.200176    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:09:12.200222    3976 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem, removing ...
	I0818 12:09:12.200227    3976 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:09:12.200306    3976 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem (1123 bytes)
	I0818 12:09:12.200461    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:09:12.200505    3976 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem, removing ...
	I0818 12:09:12.200509    3976 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:09:12.200584    3976 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem (1675 bytes)
	I0818 12:09:12.200731    3976 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem org=jenkins.ha-373000 san=[127.0.0.1 192.169.0.5 ha-373000 localhost minikube]
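	provision.go is minting a TLS server certificate whose SANs cover exactly the names and IPs listed above (127.0.0.1, 192.169.0.5, ha-373000, localhost, minikube). A compact sketch of producing such a certificate with Go's crypto/x509 (self-signed here for brevity; minikube actually signs with the CA key from certs/ca-key.pem, and the 3-year lifetime matches the CertExpiration:26280h0m0s in the cluster config):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-373000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the provision log line above.
		DNSNames:    []string{"ha-373000", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
	}
	// Self-signed (template signs itself); minikube would pass the CA
	// cert and CA key here instead.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}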
	I0818 12:09:12.289022    3976 provision.go:177] copyRemoteCerts
	I0818 12:09:12.289076    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 12:09:12.289091    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.289227    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:12.289322    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.289416    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:12.289508    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:09:12.325856    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 12:09:12.325929    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 12:09:12.345953    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 12:09:12.346012    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0818 12:09:12.366027    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 12:09:12.366092    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 12:09:12.386212    3976 provision.go:87] duration metric: took 186.823558ms to configureAuth
	I0818 12:09:12.386225    3976 buildroot.go:189] setting minikube options for container-runtime
	I0818 12:09:12.386405    3976 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:09:12.386418    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:12.386551    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.386643    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:12.386731    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.386817    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.386909    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:12.387025    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:12.387159    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:12.387167    3976 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0818 12:09:12.445833    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0818 12:09:12.445851    3976 buildroot.go:70] root file system type: tmpfs
	I0818 12:09:12.445930    3976 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0818 12:09:12.445943    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.446067    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:12.446173    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.446279    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.446389    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:12.446543    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:12.446679    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:12.446725    3976 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0818 12:09:12.516077    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0818 12:09:12.516100    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.516233    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:12.516348    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.516437    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.516526    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:12.516667    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:12.516813    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:12.516825    3976 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0818 12:09:14.219167    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0818 12:09:14.219181    3976 machine.go:96] duration metric: took 13.22463913s to provisionDockerMachine
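The guarded one-liner above (the SSH cmd at 12:09:12.516825) is an idempotent unit install: diff -u exits non-zero when the staged docker.service.new differs from the live unit or, as in this run, when no unit exists yet, so the mv/daemon-reload/enable/restart branch fires only when something actually changed. A minimal Go sketch of issuing such a guarded command over SSH, assuming golang.org/x/crypto/ssh plus placeholder address and key paths (error handling trimmed; not minikube's actual provisioner code):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, _ := os.ReadFile("/path/to/id_rsa") // hypothetical key path
    	signer, _ := ssh.ParsePrivateKey(key)
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
    	}
    	client, _ := ssh.Dial("tcp", "192.169.0.5:22", cfg)
    	defer client.Close()

    	sess, _ := client.NewSession()
    	defer sess.Close()

    	// diff -u exits non-zero when the files differ *or* the old unit is
    	// missing, so the replace-and-restart branch runs in both cases.
    	cmd := "sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new" +
    		" || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service;" +
    		" sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }"
    	out, err := sess.CombinedOutput(cmd)
    	fmt.Println(string(out), err)
    }
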
	I0818 12:09:14.219193    3976 start.go:293] postStartSetup for "ha-373000" (driver="hyperkit")
	I0818 12:09:14.219201    3976 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 12:09:14.219211    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:14.219390    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 12:09:14.219417    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:14.219519    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:14.219630    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:14.219724    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:14.219808    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:09:14.259561    3976 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 12:09:14.263959    3976 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 12:09:14.263976    3976 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/addons for local assets ...
	I0818 12:09:14.264080    3976 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/files for local assets ...
	I0818 12:09:14.264273    3976 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> 15262.pem in /etc/ssl/certs
	I0818 12:09:14.264280    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /etc/ssl/certs/15262.pem
	I0818 12:09:14.264487    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 12:09:14.272283    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:09:14.302942    3976 start.go:296] duration metric: took 83.742133ms for postStartSetup
	I0818 12:09:14.302965    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:14.303146    3976 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0818 12:09:14.303160    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:14.303248    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:14.303361    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:14.303436    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:14.303526    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:09:14.338080    3976 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0818 12:09:14.338142    3976 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0818 12:09:14.391638    3976 fix.go:56] duration metric: took 13.612527396s for fixHost
	I0818 12:09:14.391662    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:14.391810    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:14.391899    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:14.391991    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:14.392074    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:14.392222    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:14.392364    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:14.392372    3976 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 12:09:14.449746    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724008154.620949792
	
	I0818 12:09:14.449760    3976 fix.go:216] guest clock: 1724008154.620949792
	I0818 12:09:14.449772    3976 fix.go:229] Guest: 2024-08-18 12:09:14.620949792 -0700 PDT Remote: 2024-08-18 12:09:14.391652 -0700 PDT m=+14.038170292 (delta=229.297792ms)
	I0818 12:09:14.449789    3976 fix.go:200] guest clock delta is within tolerance: 229.297792ms
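The fix.go lines above derive the guest/host clock skew: run date +%s.%N in the VM, subtract the host wall clock captured around the call, and accept the result if the delta is within tolerance (229.297792ms here). An illustrative Go version of that comparison; the one-second tolerance is an assumption, since the real threshold is not visible in this log:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // guestClockDelta parses `date +%s.%N` output (assumed to carry the full
    // 9-digit nanosecond field, as in the log above) and returns guest - host.
    func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
    	parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
    	if len(parts) != 2 {
    		return 0, fmt.Errorf("unexpected date output %q", dateOutput)
    	}
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return 0, err
    	}
    	nsec, err := strconv.ParseInt(parts[1], 10, 64)
    	if err != nil {
    		return 0, err
    	}
    	return time.Unix(sec, nsec).Sub(host), nil
    }

    func main() {
    	host := time.Unix(1724008154, 391652000) // host clock when the command ran
    	delta, _ := guestClockDelta("1724008154.620949792", host)
    	const tolerance = time.Second // hypothetical tolerance
    	fmt.Printf("delta=%v within=%v\n", delta, delta < tolerance && delta > -tolerance)
    }

Run against the values in this log, the sketch reproduces the logged delta of 229.297792ms.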
	I0818 12:09:14.449793    3976 start.go:83] releasing machines lock for "ha-373000", held for 13.670724274s
	I0818 12:09:14.449812    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:14.449942    3976 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:09:14.450037    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:14.450349    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:14.450474    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:14.450548    3976 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 12:09:14.450580    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:14.450637    3976 ssh_runner.go:195] Run: cat /version.json
	I0818 12:09:14.450648    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:14.450688    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:14.450746    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:14.450782    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:14.450836    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:14.450854    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:14.450935    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:14.450952    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:09:14.451045    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:09:14.541757    3976 ssh_runner.go:195] Run: systemctl --version
	I0818 12:09:14.546793    3976 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 12:09:14.550801    3976 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 12:09:14.550839    3976 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 12:09:14.564129    3976 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 12:09:14.564141    3976 start.go:495] detecting cgroup driver to use...
	I0818 12:09:14.564243    3976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:09:14.581664    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0818 12:09:14.590425    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0818 12:09:14.599077    3976 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0818 12:09:14.599120    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0818 12:09:14.607868    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:09:14.616526    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0818 12:09:14.625074    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:09:14.633725    3976 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 12:09:14.642461    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0818 12:09:14.651030    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0818 12:09:14.659717    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0818 12:09:14.668509    3976 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 12:09:14.676419    3976 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 12:09:14.684357    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:14.777696    3976 ssh_runner.go:195] Run: sudo systemctl restart containerd
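The sed series above rewrites /etc/containerd/config.toml so the runc shim uses cgroupfs rather than the systemd cgroup driver, keeping containerd consistent with the cgroupDriver: cgroupfs kubelet setting generated later in this log. The same SystemdCgroup substitution expressed with Go's regexp package, purely as an illustration of what the sed pattern does (the real code shells out to sed as shown):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true`
    	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	fmt.Println(re.ReplaceAllString(config, `${1}SystemdCgroup = false`))
    }
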
	I0818 12:09:14.795379    3976 start.go:495] detecting cgroup driver to use...
	I0818 12:09:14.795465    3976 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0818 12:09:14.808091    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:09:14.819351    3976 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 12:09:14.834858    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:09:14.845068    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:09:14.855088    3976 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0818 12:09:14.879151    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:09:14.889782    3976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:09:14.904555    3976 ssh_runner.go:195] Run: which cri-dockerd
	I0818 12:09:14.907616    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0818 12:09:14.914893    3976 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0818 12:09:14.928498    3976 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0818 12:09:15.021302    3976 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0818 12:09:15.126534    3976 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0818 12:09:15.126611    3976 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0818 12:09:15.141437    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:15.238491    3976 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 12:09:17.633635    3976 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.395193434s)
	I0818 12:09:17.633701    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0818 12:09:17.644119    3976 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0818 12:09:17.657413    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:09:17.668074    3976 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0818 12:09:17.762478    3976 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0818 12:09:17.858367    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:17.948600    3976 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0818 12:09:17.962148    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:09:17.972120    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:18.070649    3976 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0818 12:09:18.132791    3976 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0818 12:09:18.132869    3976 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0818 12:09:18.137140    3976 start.go:563] Will wait 60s for crictl version
	I0818 12:09:18.137200    3976 ssh_runner.go:195] Run: which crictl
	I0818 12:09:18.140608    3976 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 12:09:18.167352    3976 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0818 12:09:18.167422    3976 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:09:18.186476    3976 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:09:18.224169    3976 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0818 12:09:18.224214    3976 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:09:18.224595    3976 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0818 12:09:18.229086    3976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
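The bash fragment above is an upsert on /etc/hosts: strip any line already ending in a tab plus host.minikube.internal, append the fresh mapping, and copy the temp file back with sudo. A sketch of the same transformation as a pure Go function (names are illustrative, not minikube's):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // upsertHostsEntry drops any line already ending in "\t<host>" and appends
    // the new "<ip>\t<host>" mapping, mirroring the grep -v / echo pipeline.
    func upsertHostsEntry(contents, ip, host string) string {
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(contents, "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+host) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+host)
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	hosts := "127.0.0.1\tlocalhost\n192.169.0.1\thost.minikube.internal\n"
    	fmt.Print(upsertHostsEntry(hosts, "192.169.0.1", "host.minikube.internal"))
    }
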
	I0818 12:09:18.238631    3976 kubeadm.go:883] updating cluster {Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 12:09:18.238717    3976 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:09:18.238780    3976 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0818 12:09:18.252546    3976 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0818 12:09:18.252557    3976 docker.go:615] Images already preloaded, skipping extraction
	I0818 12:09:18.252627    3976 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0818 12:09:18.266684    3976 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0818 12:09:18.266703    3976 cache_images.go:84] Images are preloaded, skipping loading
	I0818 12:09:18.266713    3976 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0818 12:09:18.266790    3976 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-373000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 12:09:18.266861    3976 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0818 12:09:18.304192    3976 cni.go:84] Creating CNI manager for ""
	I0818 12:09:18.304204    3976 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0818 12:09:18.304213    3976 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 12:09:18.304229    3976 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-373000 NodeName:ha-373000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 12:09:18.304320    3976 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-373000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 12:09:18.304334    3976 kube-vip.go:115] generating kube-vip config ...
	I0818 12:09:18.304382    3976 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0818 12:09:18.316732    3976 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0818 12:09:18.316793    3976 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
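The manifest above is generated, not hand-written: kube-vip runs as a static pod on each control-plane node, answers ARP for the VIP 192.169.0.254, and, with lb_enable/lb_port set, load-balances API-server traffic across the control planes. A toy text/template rendering of such a manifest, using hypothetical parameter names rather than minikube's actual template:

    package main

    import (
    	"os"
    	"text/template"
    )

    // vipParams is a made-up parameter set for this sketch; the real
    // generator lives in minikube's kube-vip.go and differs in detail.
    type vipParams struct {
    	Image string
    	VIP   string
    	Port  int
    }

    const manifest = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: {{.Image}}
        args: ["manager"]
        env:
        - name: address
          value: {{.VIP}}
        - name: lb_port
          value: "{{.Port}}"
      hostNetwork: true
    `

    func main() {
    	t := template.Must(template.New("kube-vip").Parse(manifest))
    	t.Execute(os.Stdout, vipParams{
    		Image: "ghcr.io/kube-vip/kube-vip:v0.8.0",
    		VIP:   "192.169.0.254",
    		Port:  8443,
    	})
    }
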
	I0818 12:09:18.316840    3976 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 12:09:18.324597    3976 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 12:09:18.324641    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0818 12:09:18.331779    3976 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0818 12:09:18.345158    3976 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 12:09:18.358298    3976 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0818 12:09:18.372286    3976 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0818 12:09:18.385485    3976 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0818 12:09:18.388341    3976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 12:09:18.397526    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:18.496612    3976 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 12:09:18.511160    3976 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000 for IP: 192.169.0.5
	I0818 12:09:18.511172    3976 certs.go:194] generating shared ca certs ...
	I0818 12:09:18.511184    3976 certs.go:226] acquiring lock for ca certs: {Name:mkf9c4ce6e92bb713042213e4605fc93500c1ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:09:18.511356    3976 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key
	I0818 12:09:18.511436    3976 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key
	I0818 12:09:18.511446    3976 certs.go:256] generating profile certs ...
	I0818 12:09:18.511538    3976 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.key
	I0818 12:09:18.511564    3976 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.55d44b69
	I0818 12:09:18.511579    3976 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.55d44b69 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0818 12:09:18.678090    3976 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.55d44b69 ...
	I0818 12:09:18.678108    3976 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.55d44b69: {Name:mk412ce60d50ec37c24febde03f7225e8a48a24c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:09:18.678466    3976 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.55d44b69 ...
	I0818 12:09:18.678480    3976 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.55d44b69: {Name:mke31239238122280f7cbf00316b2acd43533e9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:09:18.678743    3976 certs.go:381] copying /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.55d44b69 -> /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt
	I0818 12:09:18.678987    3976 certs.go:385] copying /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.55d44b69 -> /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key
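The crypto.go lines above mint an apiserver serving certificate whose IP SANs cover the in-cluster service VIP (10.96.0.1), loopback, both control-plane node IPs, and the kube-vip address 192.169.0.254, so the API server verifies under any of those addresses. A compressed crypto/x509 sketch of issuing such a cert; the throwaway in-memory CA stands in for the cached minikubeCA key, and key persistence and file locking are omitted:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for minikubeCA (the real code reuses the
    	// cached key under .minikube/ca.key).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Serving cert with the IP SANs listed in the log line above.
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.169.0.5"), net.ParseIP("192.169.0.6"), net.ParseIP("192.169.0.254"),
    		},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
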
	I0818 12:09:18.679293    3976 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key
	I0818 12:09:18.679306    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0818 12:09:18.679332    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0818 12:09:18.679353    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0818 12:09:18.679374    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0818 12:09:18.679394    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0818 12:09:18.679414    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0818 12:09:18.679441    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0818 12:09:18.679462    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0818 12:09:18.679567    3976 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem (1338 bytes)
	W0818 12:09:18.679618    3976 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526_empty.pem, impossibly tiny 0 bytes
	I0818 12:09:18.679629    3976 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem (1675 bytes)
	I0818 12:09:18.679662    3976 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem (1082 bytes)
	I0818 12:09:18.679695    3976 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem (1123 bytes)
	I0818 12:09:18.679735    3976 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem (1675 bytes)
	I0818 12:09:18.679815    3976 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:09:18.679851    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /usr/share/ca-certificates/15262.pem
	I0818 12:09:18.679895    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:09:18.679917    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem -> /usr/share/ca-certificates/1526.pem
	I0818 12:09:18.680416    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 12:09:18.731491    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0818 12:09:18.777149    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 12:09:18.836957    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0818 12:09:18.879727    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0818 12:09:18.904838    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 12:09:18.933787    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 12:09:18.969389    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 12:09:18.994753    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /usr/share/ca-certificates/15262.pem (1708 bytes)
	I0818 12:09:19.013849    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 12:09:19.033471    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem --> /usr/share/ca-certificates/1526.pem (1338 bytes)
	I0818 12:09:19.052595    3976 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 12:09:19.066128    3976 ssh_runner.go:195] Run: openssl version
	I0818 12:09:19.070271    3976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15262.pem && ln -fs /usr/share/ca-certificates/15262.pem /etc/ssl/certs/15262.pem"
	I0818 12:09:19.079228    3976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15262.pem
	I0818 12:09:19.082728    3976 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:54 /usr/share/ca-certificates/15262.pem
	I0818 12:09:19.082763    3976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15262.pem
	I0818 12:09:19.086877    3976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15262.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 12:09:19.095804    3976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 12:09:19.104889    3976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:09:19.108208    3976 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:38 /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:09:19.108241    3976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:09:19.112406    3976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 12:09:19.121720    3976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1526.pem && ln -fs /usr/share/ca-certificates/1526.pem /etc/ssl/certs/1526.pem"
	I0818 12:09:19.130845    3976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1526.pem
	I0818 12:09:19.134345    3976 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:54 /usr/share/ca-certificates/1526.pem
	I0818 12:09:19.134389    3976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1526.pem
	I0818 12:09:19.138941    3976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1526.pem /etc/ssl/certs/51391683.0"
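The openssl/ln pairs above publish each CA certificate under its OpenSSL subject hash (for example minikubeCA.pem as b5213941.0), the layout OpenSSL's CApath lookup expects. A sketch of that hash-and-link step, shelling out to the same openssl x509 -hash invocation; the helper name is made up:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCACert (hypothetical helper) computes the OpenSSL subject hash of
    // a PEM cert, then links it as /etc/ssl/certs/<hash>.0, mirroring the
    // openssl/ln pair in the log above. Needs root to write /etc/ssl/certs.
    func installCACert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// Replace any stale link so repeated provisioning stays idempotent.
    	os.Remove(link)
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
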
	I0818 12:09:19.148376    3976 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 12:09:19.151715    3976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 12:09:19.155985    3976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 12:09:19.160273    3976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 12:09:19.165064    3976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 12:09:19.169962    3976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 12:09:19.174244    3976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0818 12:09:19.178473    3976 kubeadm.go:392] StartCluster: {Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:09:19.178593    3976 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0818 12:09:19.190838    3976 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 12:09:19.199172    3976 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 12:09:19.199186    3976 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 12:09:19.199227    3976 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 12:09:19.207402    3976 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 12:09:19.207710    3976 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-373000" does not appear in /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:09:19.207791    3976 kubeconfig.go:62] /Users/jenkins/minikube-integration/19423-1007/kubeconfig needs updating (will repair): [kubeconfig missing "ha-373000" cluster setting kubeconfig missing "ha-373000" context setting]
	I0818 12:09:19.207967    3976 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/kubeconfig: {Name:mk884388b2510ace5b23e8fdd3343cda9524a1f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:09:19.208584    3976 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:09:19.208770    3976 kapi.go:59] client config for ha-373000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x52acf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0818 12:09:19.209064    3976 cert_rotation.go:140] Starting client certificate rotation controller
	I0818 12:09:19.209255    3976 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 12:09:19.217108    3976 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0818 12:09:19.217125    3976 kubeadm.go:597] duration metric: took 17.934031ms to restartPrimaryControlPlane
	I0818 12:09:19.217132    3976 kubeadm.go:394] duration metric: took 38.665023ms to StartCluster
	I0818 12:09:19.217145    3976 settings.go:142] acquiring lock: {Name:mk54874f9a6ab5b428c8697c9ef71cbcdde2d89e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:09:19.217216    3976 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:09:19.217617    3976 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/kubeconfig: {Name:mk884388b2510ace5b23e8fdd3343cda9524a1f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:09:19.217869    3976 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:09:19.217886    3976 start.go:241] waiting for startup goroutines ...
	I0818 12:09:19.217906    3976 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 12:09:19.217983    3976 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:09:19.263234    3976 out.go:177] * Enabled addons: 
	I0818 12:09:19.284302    3976 addons.go:510] duration metric: took 66.388858ms for enable addons: enabled=[]
	I0818 12:09:19.284387    3976 start.go:246] waiting for cluster config update ...
	I0818 12:09:19.284400    3976 start.go:255] writing updated cluster config ...
	I0818 12:09:19.306484    3976 out.go:201] 
	I0818 12:09:19.327608    3976 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:09:19.327742    3976 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:09:19.350369    3976 out.go:177] * Starting "ha-373000-m02" control-plane node in "ha-373000" cluster
	I0818 12:09:19.392104    3976 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:09:19.392164    3976 cache.go:56] Caching tarball of preloaded images
	I0818 12:09:19.392336    3976 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0818 12:09:19.392355    3976 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:09:19.392486    3976 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:09:19.393415    3976 start.go:360] acquireMachinesLock for ha-373000-m02: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:09:19.393522    3976 start.go:364] duration metric: took 80.918µs to acquireMachinesLock for "ha-373000-m02"
	I0818 12:09:19.393546    3976 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:09:19.393556    3976 fix.go:54] fixHost starting: m02
	I0818 12:09:19.393965    3976 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:09:19.393990    3976 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:09:19.403655    3976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52040
	I0818 12:09:19.404217    3976 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:09:19.404634    3976 main.go:141] libmachine: Using API Version  1
	I0818 12:09:19.404650    3976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:09:19.405004    3976 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:09:19.405118    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:19.405222    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetState
	I0818 12:09:19.405303    3976 main.go:141] libmachine: (ha-373000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:19.405380    3976 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid from json: 3847
	I0818 12:09:19.406287    3976 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid 3847 missing from process table
	I0818 12:09:19.406302    3976 fix.go:112] recreateIfNeeded on ha-373000-m02: state=Stopped err=<nil>
	I0818 12:09:19.406312    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	W0818 12:09:19.406463    3976 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:09:19.448356    3976 out.go:177] * Restarting existing hyperkit VM for "ha-373000-m02" ...
	I0818 12:09:19.469229    3976 main.go:141] libmachine: (ha-373000-m02) Calling .Start
	I0818 12:09:19.469501    3976 main.go:141] libmachine: (ha-373000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:19.469542    3976 main.go:141] libmachine: (ha-373000-m02) minikube might have been shut down in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid
	I0818 12:09:19.471314    3976 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid 3847 missing from process table
	I0818 12:09:19.471327    3976 main.go:141] libmachine: (ha-373000-m02) DBG | pid 3847 is in state "Stopped"
	I0818 12:09:19.471351    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid...
	I0818 12:09:19.471584    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Using UUID 7a237572-4e62-4b98-a476-83254bfde967
	I0818 12:09:19.500704    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Generated MAC ca:b5:c4:e6:47:79
	I0818 12:09:19.500730    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000
	I0818 12:09:19.500855    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7a237572-4e62-4b98-a476-83254bfde967", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c4600)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:09:19.500885    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7a237572-4e62-4b98-a476-83254bfde967", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c4600)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:09:19.500929    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "7a237572-4e62-4b98-a476-83254bfde967", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/ha-373000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"}
	I0818 12:09:19.500977    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 7a237572-4e62-4b98-a476-83254bfde967 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/ha-373000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"
	I0818 12:09:19.500998    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 12:09:19.502361    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 DEBUG: hyperkit: Pid is 3997
	I0818 12:09:19.502828    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Attempt 0
	I0818 12:09:19.502885    3976 main.go:141] libmachine: (ha-373000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:19.502920    3976 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid from json: 3997
	I0818 12:09:19.504725    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Searching for ca:b5:c4:e6:47:79 in /var/db/dhcpd_leases ...
	I0818 12:09:19.504780    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0818 12:09:19.504828    3976 main.go:141] libmachine: (ha-373000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:09:19.504848    3976 main.go:141] libmachine: (ha-373000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:09:19.504870    3976 main.go:141] libmachine: (ha-373000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:09:19.504882    3976 main.go:141] libmachine: (ha-373000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c3975b}
	I0818 12:09:19.504895    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Found match: ca:b5:c4:e6:47:79
	I0818 12:09:19.504900    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetConfigRaw
	I0818 12:09:19.504907    3976 main.go:141] libmachine: (ha-373000-m02) DBG | IP: 192.169.0.6
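
For reference, the lease lookup above resolves the VM's generated MAC to its IP by scanning /var/db/dhcpd_leases, macOS's DHCP lease file. A minimal Go sketch of that scan (not minikube's actual parser; it assumes the standard brace-delimited lease entries with ip_address= and hw_address= fields):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	// MAC from the log above; each lease entry is a "{ ... }" block.
    	const wantMAC = "ca:b5:c4:e6:47:79"
    	f, err := os.Open("/var/db/dhcpd_leases")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	defer f.Close()

    	var ip, mac string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		switch {
    		case strings.HasPrefix(line, "ip_address="):
    			ip = strings.TrimPrefix(line, "ip_address=")
    		case strings.HasPrefix(line, "hw_address="):
    			// hw_address has the form "1,ca:b5:c4:e6:47:79"; drop the type prefix.
    			mac = strings.TrimPrefix(line, "hw_address=")
    			if i := strings.IndexByte(mac, ','); i >= 0 {
    				mac = mac[i+1:]
    			}
    		case line == "}": // entry closed: check for a match, then reset
    			if mac == wantMAC {
    				fmt.Println("Found match:", mac, "IP:", ip)
    				return
    			}
    			ip, mac = "", ""
    		}
    	}
    }
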
	I0818 12:09:19.505665    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetIP
	I0818 12:09:19.505858    3976 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:09:19.506316    3976 machine.go:93] provisionDockerMachine start ...
	I0818 12:09:19.506328    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:19.506474    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:19.506602    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:19.506707    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:19.506790    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:19.506894    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:19.507039    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:19.507197    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:19.507205    3976 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 12:09:19.510551    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 12:09:19.519215    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 12:09:19.520168    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:09:19.520203    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:09:19.520228    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:09:19.520254    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:09:19.902342    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 12:09:19.902357    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 12:09:20.017440    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:09:20.017463    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:09:20.017471    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:09:20.017477    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:09:20.018303    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 12:09:20.018315    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 12:09:25.632462    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:25 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0818 12:09:25.632549    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:25 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0818 12:09:25.632559    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:25 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0818 12:09:25.657887    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:25 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0818 12:09:28.954523    3976 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.6:22: connect: connection refused
	I0818 12:09:32.012675    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 12:09:32.012690    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetMachineName
	I0818 12:09:32.012844    3976 buildroot.go:166] provisioning hostname "ha-373000-m02"
	I0818 12:09:32.012857    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetMachineName
	I0818 12:09:32.012969    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.013100    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:32.013206    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.013295    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.013399    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:32.013577    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:32.013797    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:32.013807    3976 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-373000-m02 && echo "ha-373000-m02" | sudo tee /etc/hostname
	I0818 12:09:32.083655    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-373000-m02
	
	I0818 12:09:32.083671    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.083802    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:32.083888    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.083968    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.084051    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:32.084177    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:32.084328    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:32.084343    3976 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-373000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-373000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-373000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 12:09:32.145743    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 12:09:32.145757    3976 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1007/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1007/.minikube}
	I0818 12:09:32.145771    3976 buildroot.go:174] setting up certificates
	I0818 12:09:32.145778    3976 provision.go:84] configureAuth start
	I0818 12:09:32.145785    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetMachineName
	I0818 12:09:32.145913    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetIP
	I0818 12:09:32.146013    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.146119    3976 provision.go:143] copyHostCerts
	I0818 12:09:32.146155    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:09:32.146207    3976 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem, removing ...
	I0818 12:09:32.146213    3976 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:09:32.146346    3976 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem (1082 bytes)
	I0818 12:09:32.146563    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:09:32.146599    3976 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem, removing ...
	I0818 12:09:32.146604    3976 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:09:32.146673    3976 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem (1123 bytes)
	I0818 12:09:32.146816    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:09:32.146847    3976 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem, removing ...
	I0818 12:09:32.146852    3976 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:09:32.146916    3976 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem (1675 bytes)
	I0818 12:09:32.147063    3976 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem org=jenkins.ha-373000-m02 san=[127.0.0.1 192.169.0.6 ha-373000-m02 localhost minikube]
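
The provision.go:117 line above generates a server certificate signed by the profile's CA, valid for the SANs [127.0.0.1 192.169.0.6 ha-373000-m02 localhost minikube]. A self-contained sketch of that kind of CA-signed server cert with Go's crypto/x509 (an illustration only, not minikube's provision code; it generates a throwaway CA in place of the certs/ca.pem key pair the log refers to):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA key/cert (hypothetical; the real run loads these from disk).
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "exampleCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caCert, err := x509.ParseCertificate(caDER)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Server key and cert carrying the SANs from the log line above.
    	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-373000-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(10, 0, 0),
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
    		DNSNames:     []string{"ha-373000-m02", "localhost", "minikube"},
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
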
	I0818 12:09:32.439235    3976 provision.go:177] copyRemoteCerts
	I0818 12:09:32.439288    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 12:09:32.439303    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.439451    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:32.439555    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.439662    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:32.439767    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:09:32.473899    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 12:09:32.473971    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 12:09:32.492902    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 12:09:32.492977    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0818 12:09:32.512205    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 12:09:32.512269    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 12:09:32.531269    3976 provision.go:87] duration metric: took 385.496037ms to configureAuth
	I0818 12:09:32.531282    3976 buildroot.go:189] setting minikube options for container-runtime
	I0818 12:09:32.531440    3976 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:09:32.531454    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:32.531586    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.531687    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:32.531797    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.531905    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.531985    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:32.532087    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:32.532212    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:32.532220    3976 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0818 12:09:32.586134    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0818 12:09:32.586145    3976 buildroot.go:70] root file system type: tmpfs
	I0818 12:09:32.586228    3976 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0818 12:09:32.586239    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.586366    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:32.586454    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.586566    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.586649    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:32.586801    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:32.586940    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:32.586986    3976 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0818 12:09:32.654663    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0818 12:09:32.654688    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.654820    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:32.654904    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.654974    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.655053    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:32.655180    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:32.655330    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:32.655343    3976 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0818 12:09:34.321102    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0818 12:09:34.321115    3976 machine.go:96] duration metric: took 14.8152512s to provisionDockerMachine
	I0818 12:09:34.321123    3976 start.go:293] postStartSetup for "ha-373000-m02" (driver="hyperkit")
	I0818 12:09:34.321131    3976 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 12:09:34.321140    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:34.321324    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 12:09:34.321348    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:34.321440    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:34.321528    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:34.321619    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:34.321715    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:09:34.356724    3976 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 12:09:34.363921    3976 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 12:09:34.363935    3976 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/addons for local assets ...
	I0818 12:09:34.364038    3976 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/files for local assets ...
	I0818 12:09:34.364185    3976 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> 15262.pem in /etc/ssl/certs
	I0818 12:09:34.364192    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /etc/ssl/certs/15262.pem
	I0818 12:09:34.364347    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 12:09:34.379409    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /etc/ssl/certs/15262.pem (1708 bytes)
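
The filesync scan above maps every file under the local .minikube/files root onto the same path in the guest, which is how files/etc/ssl/certs/15262.pem becomes /etc/ssl/certs/15262.pem. A minimal sketch of that mapping (not minikube's filesync.go; it just walks the root and strips the prefix):

    package main

    import (
    	"fmt"
    	"io/fs"
    	"path/filepath"
    )

    func main() {
    	// Local asset root from the log; each file's path relative to the
    	// root becomes its absolute destination on the guest.
    	root := "/Users/jenkins/minikube-integration/19423-1007/.minikube/files"
    	filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
    		if err != nil || d.IsDir() {
    			return err
    		}
    		rel, _ := filepath.Rel(root, p)
    		fmt.Println(p, "->", "/"+filepath.ToSlash(rel))
    		return nil
    	})
    }
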
	I0818 12:09:34.407459    3976 start.go:296] duration metric: took 86.328927ms for postStartSetup
	I0818 12:09:34.407481    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:34.407638    3976 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0818 12:09:34.407658    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:34.407738    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:34.407823    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:34.407908    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:34.407985    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:09:34.441305    3976 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0818 12:09:34.441365    3976 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0818 12:09:34.475756    3976 fix.go:56] duration metric: took 15.082665832s for fixHost
	I0818 12:09:34.475780    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:34.475917    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:34.476014    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:34.476109    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:34.476204    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:34.476334    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:34.476475    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:34.476483    3976 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 12:09:34.531245    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724008174.705830135
	
	I0818 12:09:34.531256    3976 fix.go:216] guest clock: 1724008174.705830135
	I0818 12:09:34.531265    3976 fix.go:229] Guest: 2024-08-18 12:09:34.705830135 -0700 PDT Remote: 2024-08-18 12:09:34.475769 -0700 PDT m=+34.122913514 (delta=230.061135ms)
	I0818 12:09:34.531276    3976 fix.go:200] guest clock delta is within tolerance: 230.061135ms
	I0818 12:09:34.531281    3976 start.go:83] releasing machines lock for "ha-373000-m02", held for 15.138221498s
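
The clock check above compares the guest's `date +%s.%N` output against the host wall clock: 1724008174.705830135 s − 1724008174.475769 s = 0.230061135 s, i.e. the 230.061135ms delta reported, inside minikube's tolerance. The same arithmetic in a few lines of Go:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	guest := time.Unix(1724008174, 705830135)  // guest `date +%s.%N` from the log
    	remote := time.Unix(1724008174, 475769000) // host wall clock from the log
    	fmt.Println(guest.Sub(remote))             // prints 230.061135ms
    }
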
	I0818 12:09:34.531298    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:34.531428    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetIP
	I0818 12:09:34.555141    3976 out.go:177] * Found network options:
	I0818 12:09:34.576875    3976 out.go:177]   - NO_PROXY=192.169.0.5
	W0818 12:09:34.597784    3976 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 12:09:34.597830    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:34.598688    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:34.598932    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:34.599031    3976 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 12:09:34.599086    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	W0818 12:09:34.599150    3976 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 12:09:34.599257    3976 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0818 12:09:34.599278    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:34.599308    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:34.599482    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:34.599521    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:34.599684    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:34.599720    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:34.599871    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:34.599921    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:09:34.600032    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	W0818 12:09:34.631739    3976 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 12:09:34.631799    3976 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 12:09:34.677593    3976 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 12:09:34.677615    3976 start.go:495] detecting cgroup driver to use...
	I0818 12:09:34.677737    3976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:09:34.693773    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0818 12:09:34.702951    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0818 12:09:34.711799    3976 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0818 12:09:34.711840    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0818 12:09:34.720906    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:09:34.729957    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0818 12:09:34.738902    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:09:34.747932    3976 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 12:09:34.757312    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0818 12:09:34.766375    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0818 12:09:34.775307    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0818 12:09:34.784400    3976 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 12:09:34.792630    3976 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 12:09:34.801021    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:34.911872    3976 ssh_runner.go:195] Run: sudo systemctl restart containerd
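
The containerd.go:146 step above forces the cgroupfs driver by rewriting config.toml in place over SSH with sed. A small Go sketch of the same substitution applied locally, using the identical pattern the sed command uses (the sample TOML snippet is hypothetical):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// Hypothetical fragment of /etc/containerd/config.toml before the edit.
    	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true`
    	// Same pattern as the remote sed: keep the indentation, flip the value.
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }
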
	I0818 12:09:34.930682    3976 start.go:495] detecting cgroup driver to use...
	I0818 12:09:34.930753    3976 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0818 12:09:34.944697    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:09:34.956782    3976 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 12:09:34.974233    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:09:34.986114    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:09:34.998297    3976 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0818 12:09:35.018378    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:09:35.029759    3976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:09:35.044553    3976 ssh_runner.go:195] Run: which cri-dockerd
	I0818 12:09:35.047654    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0818 12:09:35.055897    3976 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0818 12:09:35.069339    3976 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0818 12:09:35.163048    3976 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0818 12:09:35.263866    3976 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0818 12:09:35.263888    3976 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0818 12:09:35.281642    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:35.375004    3976 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 12:10:36.400829    3976 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.027707476s)
	I0818 12:10:36.400907    3976 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0818 12:10:36.437434    3976 out.go:201] 
	W0818 12:10:36.459246    3976 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 18 19:09:33 ha-373000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 18 19:09:33 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:33.132734313Z" level=info msg="Starting up"
	Aug 18 19:09:33 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:33.133217341Z" level=info msg="containerd not running, starting managed containerd"
	Aug 18 19:09:33 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:33.133706453Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=503
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.150884592Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165526624Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165600672Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165665661Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165701505Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165883163Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165980711Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166114419Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166158739Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166192923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166222480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166373263Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166624364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168284638Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168338968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168477528Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168522410Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168684236Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168742254Z" level=info msg="metadata content store policy set" policy=shared
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172229271Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172291175Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172328725Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172361584Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172397084Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172469115Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172636000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172713269Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172756026Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172790721Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172822478Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172857013Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172889097Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172923123Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172955052Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172985350Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173017995Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173047134Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173082956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173138952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173171857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173266115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173303729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173337305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173367548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173397195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173426651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173461907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173491945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173521151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173551817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173584158Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173620017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173651734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173681138Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173753818Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173797160Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173851051Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173888629Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173919044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173948712Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173979628Z" level=info msg="NRI interface is disabled by configuration."
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.174202763Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.174288578Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.174373231Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.174419718Z" level=info msg="containerd successfully booted in 0.024281s"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.163281667Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.193454663Z" level=info msg="Loading containers: start."
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.358483324Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.419779026Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.464759087Z" level=info msg="Loading containers: done."
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.475407585Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.475556691Z" level=info msg="Daemon has completed initialization"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.493178383Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.493236047Z" level=info msg="API listen on [::]:2376"
	Aug 18 19:09:34 ha-373000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 18 19:09:35 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:35.562066100Z" level=info msg="Processing signal 'terminated'"
	Aug 18 19:09:35 ha-373000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 18 19:09:35 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:35.563196599Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 18 19:09:35 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:35.563381674Z" level=info msg="Daemon shutdown complete"
	Aug 18 19:09:35 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:35.563404669Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 18 19:09:35 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:35.563423915Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 18 19:09:36 ha-373000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 18 19:09:36 ha-373000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 18 19:09:36 ha-373000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 18 19:09:36 ha-373000-m02 dockerd[1172]: time="2024-08-18T19:09:36.603637435Z" level=info msg="Starting up"
	Aug 18 19:10:36 ha-373000-m02 dockerd[1172]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 18 19:10:36 ha-373000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 18 19:10:36 ha-373000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 18 19:10:36 ha-373000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
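
The decisive journal line is dockerd[1172] exhausting its startup window (19:09:36 to 19:10:36) trying to dial /run/containerd/containerd.sock; the provisioning steps above show containerd being stopped on the node at 12:09:34, so nothing answers on that socket. A minimal Go sketch of that retry-until-deadline failure shape (an illustration only; dockerd's real containerd client is gRPC-based with a longer budget):

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	const sock = "/run/containerd/containerd.sock"
    	// Short deadline for the sketch; each dial fails fast when no one
    	// is listening, and the retries burn the whole budget, so the
    	// caller ends with context deadline exceeded.
    	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
    	defer cancel()
    	for {
    		c, err := net.Dial("unix", sock)
    		if err == nil {
    			c.Close()
    			fmt.Println("connected")
    			return
    		}
    		select {
    		case <-ctx.Done():
    			fmt.Printf("failed to dial %q: %v\n", sock, ctx.Err())
    			return
    		case <-time.After(250 * time.Millisecond):
    		}
    	}
    }
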
	W0818 12:10:36.459348    3976 out.go:270] * 
	W0818 12:10:36.460605    3976 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:10:36.503171    3976 out.go:201] 
	
	
	==> Docker <==
	Aug 18 19:09:46 ha-373000 dockerd[1177]: time="2024-08-18T19:09:46.453438095Z" level=info msg="shim disconnected" id=6f6bc1d9592b465e3b7c4d9db0e74b67040cbbed37626775b4aad68af80ddeb2 namespace=moby
	Aug 18 19:09:46 ha-373000 dockerd[1177]: time="2024-08-18T19:09:46.453510509Z" level=warning msg="cleaning up after shim disconnected" id=6f6bc1d9592b465e3b7c4d9db0e74b67040cbbed37626775b4aad68af80ddeb2 namespace=moby
	Aug 18 19:09:46 ha-373000 dockerd[1177]: time="2024-08-18T19:09:46.453519178Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 18 19:09:46 ha-373000 dockerd[1169]: time="2024-08-18T19:09:46.453809871Z" level=info msg="ignoring event" container=6f6bc1d9592b465e3b7c4d9db0e74b67040cbbed37626775b4aad68af80ddeb2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 18 19:09:47 ha-373000 dockerd[1177]: time="2024-08-18T19:09:47.461847284Z" level=info msg="shim disconnected" id=0a3644cb0db7c69e66c1b1d0f3234fba20e233a15e27e5772148d80d14f33f4e namespace=moby
	Aug 18 19:09:47 ha-373000 dockerd[1177]: time="2024-08-18T19:09:47.461900797Z" level=warning msg="cleaning up after shim disconnected" id=0a3644cb0db7c69e66c1b1d0f3234fba20e233a15e27e5772148d80d14f33f4e namespace=moby
	Aug 18 19:09:47 ha-373000 dockerd[1177]: time="2024-08-18T19:09:47.461909634Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 18 19:09:47 ha-373000 dockerd[1169]: time="2024-08-18T19:09:47.462210879Z" level=info msg="ignoring event" container=0a3644cb0db7c69e66c1b1d0f3234fba20e233a15e27e5772148d80d14f33f4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 18 19:10:04 ha-373000 dockerd[1177]: time="2024-08-18T19:10:04.870147305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 18 19:10:04 ha-373000 dockerd[1177]: time="2024-08-18T19:10:04.870333575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 18 19:10:04 ha-373000 dockerd[1177]: time="2024-08-18T19:10:04.870347019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:10:04 ha-373000 dockerd[1177]: time="2024-08-18T19:10:04.870447403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:10:06 ha-373000 dockerd[1177]: time="2024-08-18T19:10:06.866261869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 18 19:10:06 ha-373000 dockerd[1177]: time="2024-08-18T19:10:06.866358878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 18 19:10:06 ha-373000 dockerd[1177]: time="2024-08-18T19:10:06.866371963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:10:06 ha-373000 dockerd[1177]: time="2024-08-18T19:10:06.866695913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:10:27 ha-373000 dockerd[1177]: time="2024-08-18T19:10:27.508003021Z" level=info msg="shim disconnected" id=24788de6a779b6b3b57b2e2becb6248a4b461c40237836c9a085858f19bf537e namespace=moby
	Aug 18 19:10:27 ha-373000 dockerd[1177]: time="2024-08-18T19:10:27.508123156Z" level=warning msg="cleaning up after shim disconnected" id=24788de6a779b6b3b57b2e2becb6248a4b461c40237836c9a085858f19bf537e namespace=moby
	Aug 18 19:10:27 ha-373000 dockerd[1177]: time="2024-08-18T19:10:27.508131683Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 18 19:10:27 ha-373000 dockerd[1169]: time="2024-08-18T19:10:27.508457282Z" level=info msg="ignoring event" container=24788de6a779b6b3b57b2e2becb6248a4b461c40237836c9a085858f19bf537e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 18 19:10:27 ha-373000 dockerd[1177]: time="2024-08-18T19:10:27.520349911Z" level=warning msg="cleanup warnings time=\"2024-08-18T19:10:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 18 19:10:27 ha-373000 dockerd[1177]: time="2024-08-18T19:10:27.569676280Z" level=info msg="shim disconnected" id=7e27aa53db964848f64d9282a63325b77e8066d4f7afd7e078ca49d74f625756 namespace=moby
	Aug 18 19:10:27 ha-373000 dockerd[1177]: time="2024-08-18T19:10:27.569734174Z" level=warning msg="cleaning up after shim disconnected" id=7e27aa53db964848f64d9282a63325b77e8066d4f7afd7e078ca49d74f625756 namespace=moby
	Aug 18 19:10:27 ha-373000 dockerd[1177]: time="2024-08-18T19:10:27.569742722Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 18 19:10:27 ha-373000 dockerd[1169]: time="2024-08-18T19:10:27.569876898Z" level=info msg="ignoring event" container=7e27aa53db964848f64d9282a63325b77e8066d4f7afd7e078ca49d74f625756 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
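	
	The paired "shim disconnected" / "ignoring event" entries above record container exits; the IDs 7e27aa53db96… and 24788de6a779… correspond to the kube-apiserver and kube-controller-manager rows in the status table below. As a hedged triage sketch (the profile name and short container IDs come from this log; docker inspect and its Go-template format flag are standard), the exit codes and exit times could be pulled with:
	
	  out/minikube-darwin-amd64 ssh -p ha-373000 -- sudo docker inspect \
	    --format '{{.Name}} exit={{.State.ExitCode}} finished={{.State.FinishedAt}}' \
	    7e27aa53db96 24788de6a779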
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7e27aa53db964       604f5db92eaa8       31 seconds ago       Exited              kube-apiserver            3                   5f2bcb86e47be       kube-apiserver-ha-373000
	24788de6a779b       045733566833c       33 seconds ago       Exited              kube-controller-manager   4                   45b85b05f9eab       kube-controller-manager-ha-373000
	e7bf93d680505       38af8ddebf499       About a minute ago   Running             kube-vip                  1                   37cbb7af9134a       kube-vip-ha-373000
	5bb7217cec87f       1766f54c897f0       About a minute ago   Running             kube-scheduler            2                   11d6e68c74890       kube-scheduler-ha-373000
	4ad014ace2b0a       2e96e5913fc06       About a minute ago   Running             etcd                      2                   4905344ca55ee       etcd-ha-373000
	eb459a6cac5c5       6e38f40d628db       4 minutes ago        Exited              storage-provisioner       2                   3772c138aa65e       storage-provisioner
	fc1b30cd2c8f2       8c811b4aec35f       4 minutes ago        Exited              busybox                   1                   eb4ed9664dda9       busybox-7dff88458-hdg8r
	f3dbf3c176d9d       cbb01a7bd410d       4 minutes ago        Exited              coredns                   1                   fc1f2fb60f7c5       coredns-6f6b679f8f-rcfmc
	09b8ded75e80f       cbb01a7bd410d       4 minutes ago        Exited              coredns                   1                   bfce6a3dd1783       coredns-6f6b679f8f-hv98f
	530d580001894       ad83b2ca7b09e       4 minutes ago        Exited              kube-proxy                1                   c8f48c6f44e55       kube-proxy-2xkhp
	fbeef7aab770f       12968670680f4       4 minutes ago        Exited              kindnet-cni               1                   32a6ca59d02e7       kindnet-k4c4p
	ebe78e53d91d8       38af8ddebf499       5 minutes ago        Exited              kube-vip                  0                   32cc18cf0bf63       kube-vip-ha-373000
	a9e532272f1be       2e96e5913fc06       5 minutes ago        Exited              etcd                      1                   4c11500a40693       etcd-ha-373000
	de016fdbd6fe9       1766f54c897f0       5 minutes ago        Exited              kube-scheduler            1                   a3cc486386c46       kube-scheduler-ha-373000
	
	
	==> coredns [09b8ded75e80] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54168 - 48100 "HINFO IN 5449853140043981156.1960656544577820065. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012696853s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1317389180]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.060) (total time: 30002ms):
	Trace[1317389180]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (19:06:13.063)
	Trace[1317389180]: [30.002782846s] [30.002782846s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[804407349]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.060) (total time: 30003ms):
	Trace[804407349]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (19:06:13.063)
	Trace[804407349]: [30.003234686s] [30.003234686s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1407395902]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.063) (total time: 30001ms):
	Trace[1407395902]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (19:06:13.064)
	Trace[1407395902]: [30.001205512s] [30.001205512s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f3dbf3c176d9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48257 - 13179 "HINFO IN 3102078210809204073.2916918949998232158. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.013387746s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1929152146]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.060) (total time: 30003ms):
	Trace[1929152146]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (19:06:13.063)
	Trace[1929152146]: [30.003742558s] [30.003742558s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[763765503]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.060) (total time: 30003ms):
	Trace[763765503]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (19:06:13.064)
	Trace[763765503]: [30.003508272s] [30.003508272s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1437534784]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.063) (total time: 30000ms):
	Trace[1437534784]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:06:13.064)
	Trace[1437534784]: [30.000417221s] [30.000417221s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
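	
	Both CoreDNS replicas show the same failure mode: reflector list calls against 10.96.0.1:443 (the ClusterIP of the kubernetes Service) time out, which is the expected symptom when the apiserver is down rather than a DNS-side fault. A minimal sketch for verifying the in-cluster endpoint once the apiserver recovers, reusing the kubectl binary and kubeconfig paths that appear verbatim in the "describe nodes" section below:
	
	  out/minikube-darwin-amd64 ssh -p ha-373000 -- sudo /var/lib/minikube/binaries/v1.31.0/kubectl \
	    --kubeconfig=/var/lib/minikube/kubeconfig get endpoints kubernetes -n default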
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0818 19:10:37.932906    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0818 19:10:37.934579    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0818 19:10:37.935772    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0818 19:10:37.936997    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0818 19:10:37.938589    2672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
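	
	The describe-nodes failure is a downstream symptom: kubectl targets localhost:8443 on the node, where no apiserver is currently listening. A hedged probe, assuming curl is present in the guest image (an HTTP code of 000 with a non-zero exit means nothing accepted the connection):
	
	  out/minikube-darwin-amd64 ssh -p ha-373000 -- curl -sk -o /dev/null -w '%{http_code}\n' https://localhost:8443/healthz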
	
	
	==> dmesg <==
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.035419] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007963] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.691053] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000000] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006881] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.891457] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.229875] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000003] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.300202] systemd-fstab-generator[467]: Ignoring "noauto" option for root device
	[  +0.101114] systemd-fstab-generator[479]: Ignoring "noauto" option for root device
	[  +2.001813] systemd-fstab-generator[1098]: Ignoring "noauto" option for root device
	[  +0.247527] systemd-fstab-generator[1135]: Ignoring "noauto" option for root device
	[  +0.100995] systemd-fstab-generator[1147]: Ignoring "noauto" option for root device
	[  +0.114396] systemd-fstab-generator[1161]: Ignoring "noauto" option for root device
	[  +0.050935] kauditd_printk_skb: 145 callbacks suppressed
	[  +2.471749] systemd-fstab-generator[1379]: Ignoring "noauto" option for root device
	[  +0.100344] systemd-fstab-generator[1391]: Ignoring "noauto" option for root device
	[  +0.088670] systemd-fstab-generator[1403]: Ignoring "noauto" option for root device
	[  +0.117158] systemd-fstab-generator[1418]: Ignoring "noauto" option for root device
	[  +0.433473] systemd-fstab-generator[1580]: Ignoring "noauto" option for root device
	[  +6.511307] kauditd_printk_skb: 168 callbacks suppressed
	[ +21.355887] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [4ad014ace2b0] <==
	{"level":"info","ts":"2024-08-18T19:10:32.043145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:32.043355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 2939] sent MsgPreVote request to 64535d9f3a4791ce at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:33.840298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:33.840378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:33.840396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:33.840418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 2939] sent MsgPreVote request to 64535d9f3a4791ce at term 3"}
	{"level":"warn","ts":"2024-08-18T19:10:34.393675Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740481722400528,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-18T19:10:34.894538Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740481722400528,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-18T19:10:35.400753Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740481722400528,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-18T19:10:35.507859Z","caller":"etcdserver/server.go:2139","msg":"failed to publish local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-373000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
	{"level":"warn","ts":"2024-08-18T19:10:35.521151Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"64535d9f3a4791ce","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:10:35.522345Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"64535d9f3a4791ce","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-18T19:10:35.645755Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:35.645965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:35.646145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:35.646837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 2939] sent MsgPreVote request to 64535d9f3a4791ce at term 3"}
	{"level":"warn","ts":"2024-08-18T19:10:35.902167Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740481722400528,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-18T19:10:36.409295Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740481722400528,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-18T19:10:36.909620Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740481722400528,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-18T19:10:37.410029Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740481722400528,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-18T19:10:37.439596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:37.439912Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:37.440199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:37.440412Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 2939] sent MsgPreVote request to 64535d9f3a4791ce at term 3"}
	{"level":"warn","ts":"2024-08-18T19:10:37.910246Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740481722400528,"retry-timeout":"500ms"}
	
	
	==> etcd [a9e532272f1b] <==
	{"level":"warn","ts":"2024-08-18T19:08:52.581006Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T19:08:49.021104Z","time spent":"3.559898593s","remote":"127.0.0.1:56420","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":0,"response size":0,"request content":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true "}
	2024/08/18 19:08:52 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-18T19:08:52.581089Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"6.986891044s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-18T19:08:52.581101Z","caller":"traceutil/trace.go:171","msg":"trace[1676890744] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; }","duration":"6.986905757s","start":"2024-08-18T19:08:45.594192Z","end":"2024-08-18T19:08:52.581098Z","steps":["trace[1676890744] 'agreement among raft nodes before linearized reading'  (duration: 6.986891942s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T19:08:52.581111Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T19:08:45.594168Z","time spent":"6.986940437s","remote":"127.0.0.1:56392","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	2024/08/18 19:08:52 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-18T19:08:52.581159Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"5.633749967s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-18T19:08:52.581170Z","caller":"traceutil/trace.go:171","msg":"trace[1130682409] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; }","duration":"5.633762365s","start":"2024-08-18T19:08:46.947405Z","end":"2024-08-18T19:08:52.581167Z","steps":["trace[1130682409] 'agreement among raft nodes before linearized reading'  (duration: 5.633750027s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T19:08:52.581180Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T19:08:46.947373Z","time spent":"5.633803888s","remote":"127.0.0.1:56504","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":0,"response size":0,"request content":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true "}
	2024/08/18 19:08:52 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-18T19:08:52.581225Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T19:08:52.217567Z","time spent":"363.656855ms","remote":"127.0.0.1:56498","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/08/18 19:08:52 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-18T19:08:52.608176Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-18T19:08:52.608248Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-18T19:08:52.608286Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-18T19:08:52.608395Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:08:52.608428Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:08:52.608446Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:08:52.608520Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:08:52.608546Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:08:52.608595Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:08:52.608606Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:08:52.610214Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-18T19:08:52.610316Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-18T19:08:52.610348Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-373000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 19:10:38 up 1 min,  0 users,  load average: 0.17, 0.10, 0.04
	Linux ha-373000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [fbeef7aab770] <==
	I0818 19:08:13.321287       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	I0818 19:08:13.321527       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0818 19:08:13.321536       1 main.go:299] handling current node
	I0818 19:08:13.321545       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0818 19:08:13.321548       1 main.go:322] Node ha-373000-m02 has CIDR [10.244.1.0/24] 
	I0818 19:08:23.318236       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0818 19:08:23.318272       1 main.go:322] Node ha-373000-m02 has CIDR [10.244.1.0/24] 
	I0818 19:08:23.318358       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0818 19:08:23.318384       1 main.go:322] Node ha-373000-m03 has CIDR [10.244.2.0/24] 
	I0818 19:08:23.318431       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0818 19:08:23.318455       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	I0818 19:08:23.318492       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0818 19:08:23.318516       1 main.go:299] handling current node
	I0818 19:08:33.318121       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0818 19:08:33.318160       1 main.go:299] handling current node
	I0818 19:08:33.318171       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0818 19:08:33.318175       1 main.go:322] Node ha-373000-m02 has CIDR [10.244.1.0/24] 
	I0818 19:08:33.318256       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0818 19:08:33.318261       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	I0818 19:08:43.314133       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0818 19:08:43.314185       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	I0818 19:08:43.314278       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0818 19:08:43.314360       1 main.go:299] handling current node
	I0818 19:08:43.314444       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0818 19:08:43.314482       1 main.go:322] Node ha-373000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [7e27aa53db96] <==
	I0818 19:10:06.959907       1 options.go:228] external host was not specified, using 192.169.0.5
	I0818 19:10:06.961347       1 server.go:142] Version: v1.31.0
	I0818 19:10:06.961387       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:10:07.546161       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0818 19:10:07.549946       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0818 19:10:07.552371       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0818 19:10:07.552381       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0818 19:10:07.552555       1 instance.go:232] Using reconciler: lease
	W0818 19:10:27.545475       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0818 19:10:27.545529       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0818 19:10:27.554420       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	W0818 19:10:27.554432       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
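	
	This fatal line is the proximate cause of the apiserver CrashLoopBackOff: after roughly 20 seconds the server still could not build its etcd-backed storage layer (the 127.0.0.1:2379 handshakes never complete, matching the quorum loss above) and exited. A hedged way to watch the restart cycle from the host, assuming docker's substring name filter matches the kubelet-generated container names:
	
	  out/minikube-darwin-amd64 ssh -p ha-373000 -- sudo docker ps -a --filter name=kube-apiserver --format '{{.ID}} {{.Status}}'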
	
	
	==> kube-controller-manager [24788de6a779] <==
	I0818 19:10:05.103965       1 serving.go:386] Generated self-signed cert in-memory
	I0818 19:10:05.483625       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0818 19:10:05.483663       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:10:05.484840       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0818 19:10:05.484954       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0818 19:10:05.484863       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0818 19:10:05.485038       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0818 19:10:27.488487       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": net/http: TLS handshake timeout"
	
	
	==> kube-proxy [530d58000189] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 19:05:43.260298       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 19:05:43.283054       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0818 19:05:43.283201       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 19:05:43.332462       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 19:05:43.332509       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 19:05:43.332527       1 server_linux.go:169] "Using iptables Proxier"
	I0818 19:05:43.335382       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 19:05:43.336178       1 server.go:483] "Version info" version="v1.31.0"
	I0818 19:05:43.336209       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:05:43.339664       1 config.go:197] "Starting service config controller"
	I0818 19:05:43.340475       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 19:05:43.340854       1 config.go:104] "Starting endpoint slice config controller"
	I0818 19:05:43.340884       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 19:05:43.342595       1 config.go:326] "Starting node config controller"
	I0818 19:05:43.342621       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 19:05:43.440978       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0818 19:05:43.441099       1 shared_informer.go:320] Caches are synced for service config
	I0818 19:05:43.442676       1 shared_informer.go:320] Caches are synced for node config
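	
	The truncated nftables errors at the top of this section ("Operation not supported") look benign on this Buildroot kernel: kube-proxy only failed to clean up nftables state, then proceeded with the iptables proxier and synced all three caches, so this component does not appear implicated in the failure. A hedged spot check that the proxier actually programmed rules (KUBE-SERVICES is the chain kube-proxy's iptables mode installs):
	
	  out/minikube-darwin-amd64 ssh -p ha-373000 -- sudo iptables -t nat -S KUBE-SERVICES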
	
	
	==> kube-scheduler [5bb7217cec87] <==
	E0818 19:10:28.561278       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48758->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.561149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48800->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.561322       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48800->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.561446       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48768->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.561856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48768->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.562037       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48784->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.562108       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48784->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.562348       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:55098->192.169.0.5:8443: read: connection reset by peer
	W0818 19:10:28.562424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48820->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.562538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48820->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	E0818 19:10:28.562490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:55098->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.562730       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48804->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.563108       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48804->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.562973       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48786->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.563171       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48786->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.563330       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48824->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.563535       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48824->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.563376       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48782->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.563736       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48782->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.563801       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48790->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.563955       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48790->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.680351       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0818 19:10:28.680700       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:10:29.353980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0818 19:10:29.354184       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-scheduler [de016fdbd6fe] <==
	I0818 19:04:58.645297       1 serving.go:386] Generated self-signed cert in-memory
	W0818 19:05:08.939365       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0818 19:05:08.939390       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0818 19:05:08.939395       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0818 19:05:17.672661       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0818 19:05:17.674961       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:05:17.680297       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0818 19:05:17.680709       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0818 19:05:17.683175       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0818 19:05:17.689784       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0818 19:05:17.786103       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0818 19:08:52.663744       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0818 19:08:52.664520       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0818 19:08:52.664805       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0818 19:08:52.665618       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 18 19:10:24 ha-373000 kubelet[1587]: I0818 19:10:24.149998    1587 kubelet_node_status.go:72] "Attempting to register node" node="ha-373000"
	Aug 18 19:10:26 ha-373000 kubelet[1587]: E0818 19:10:26.363782    1587 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-373000"
	Aug 18 19:10:26 ha-373000 kubelet[1587]: E0818 19:10:26.364415    1587 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-373000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Aug 18 19:10:27 ha-373000 kubelet[1587]: I0818 19:10:27.784218    1587 scope.go:117] "RemoveContainer" containerID="6f6bc1d9592b465e3b7c4d9db0e74b67040cbbed37626775b4aad68af80ddeb2"
	Aug 18 19:10:27 ha-373000 kubelet[1587]: I0818 19:10:27.785184    1587 scope.go:117] "RemoveContainer" containerID="7e27aa53db964848f64d9282a63325b77e8066d4f7afd7e078ca49d74f625756"
	Aug 18 19:10:27 ha-373000 kubelet[1587]: E0818 19:10:27.785346    1587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-373000_kube-system(be3d0c0dc33917295e5fed284f29b0d0)\"" pod="kube-system/kube-apiserver-ha-373000" podUID="be3d0c0dc33917295e5fed284f29b0d0"
	Aug 18 19:10:27 ha-373000 kubelet[1587]: I0818 19:10:27.792994    1587 scope.go:117] "RemoveContainer" containerID="24788de6a779b6b3b57b2e2becb6248a4b461c40237836c9a085858f19bf537e"
	Aug 18 19:10:27 ha-373000 kubelet[1587]: E0818 19:10:27.793092    1587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-373000_kube-system(ed589efba4f3e45b48b475449dc23ab8)\"" pod="kube-system/kube-controller-manager-ha-373000" podUID="ed589efba4f3e45b48b475449dc23ab8"
	Aug 18 19:10:27 ha-373000 kubelet[1587]: I0818 19:10:27.797222    1587 scope.go:117] "RemoveContainer" containerID="0a3644cb0db7c69e66c1b1d0f3234fba20e233a15e27e5772148d80d14f33f4e"
	Aug 18 19:10:28 ha-373000 kubelet[1587]: E0818 19:10:28.895968    1587 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-373000\" not found"
	Aug 18 19:10:29 ha-373000 kubelet[1587]: I0818 19:10:29.422762    1587 scope.go:117] "RemoveContainer" containerID="24788de6a779b6b3b57b2e2becb6248a4b461c40237836c9a085858f19bf537e"
	Aug 18 19:10:29 ha-373000 kubelet[1587]: E0818 19:10:29.423034    1587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-373000_kube-system(ed589efba4f3e45b48b475449dc23ab8)\"" pod="kube-system/kube-controller-manager-ha-373000" podUID="ed589efba4f3e45b48b475449dc23ab8"
	Aug 18 19:10:32 ha-373000 kubelet[1587]: E0818 19:10:32.507755    1587 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-373000.17ece84946cc9aa1  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-373000,UID:ha-373000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-373000,},FirstTimestamp:2024-08-18 19:09:18.794128033 +0000 UTC m=+0.098935686,LastTimestamp:2024-08-18 19:09:18.794128033 +0000 UTC m=+0.098935686,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-373000,}"
	Aug 18 19:10:33 ha-373000 kubelet[1587]: I0818 19:10:33.051084    1587 scope.go:117] "RemoveContainer" containerID="24788de6a779b6b3b57b2e2becb6248a4b461c40237836c9a085858f19bf537e"
	Aug 18 19:10:33 ha-373000 kubelet[1587]: E0818 19:10:33.051343    1587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-373000_kube-system(ed589efba4f3e45b48b475449dc23ab8)\"" pod="kube-system/kube-controller-manager-ha-373000" podUID="ed589efba4f3e45b48b475449dc23ab8"
	Aug 18 19:10:33 ha-373000 kubelet[1587]: I0818 19:10:33.366165    1587 kubelet_node_status.go:72] "Attempting to register node" node="ha-373000"
	Aug 18 19:10:34 ha-373000 kubelet[1587]: I0818 19:10:34.986485    1587 scope.go:117] "RemoveContainer" containerID="7e27aa53db964848f64d9282a63325b77e8066d4f7afd7e078ca49d74f625756"
	Aug 18 19:10:34 ha-373000 kubelet[1587]: E0818 19:10:34.987184    1587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-373000_kube-system(be3d0c0dc33917295e5fed284f29b0d0)\"" pod="kube-system/kube-apiserver-ha-373000" podUID="be3d0c0dc33917295e5fed284f29b0d0"
	Aug 18 19:10:35 ha-373000 kubelet[1587]: E0818 19:10:35.579478    1587 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-373000"
	Aug 18 19:10:35 ha-373000 kubelet[1587]: E0818 19:10:35.579545    1587 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-373000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Aug 18 19:10:35 ha-373000 kubelet[1587]: I0818 19:10:35.861801    1587 scope.go:117] "RemoveContainer" containerID="7e27aa53db964848f64d9282a63325b77e8066d4f7afd7e078ca49d74f625756"
	Aug 18 19:10:35 ha-373000 kubelet[1587]: E0818 19:10:35.861956    1587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-373000_kube-system(be3d0c0dc33917295e5fed284f29b0d0)\"" pod="kube-system/kube-apiserver-ha-373000" podUID="be3d0c0dc33917295e5fed284f29b0d0"
	Aug 18 19:10:38 ha-373000 kubelet[1587]: W0818 19:10:38.650520    1587 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 18 19:10:38 ha-373000 kubelet[1587]: E0818 19:10:38.650612    1587 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 18 19:10:38 ha-373000 kubelet[1587]: E0818 19:10:38.896386    1587 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-373000\" not found"
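	
	The kubelet cannot register because control-plane.minikube.internal resolves to the HA virtual IP 192.169.0.254, and "no route to host" suggests no kube-vip instance is currently answering for that address while apiserver-backed leader election is unavailable. A hedged pair of checks from the host (ip and ping are assumed present in the guest image):
	
	  out/minikube-darwin-amd64 ssh -p ha-373000 -- ip -4 addr show
	  out/minikube-darwin-amd64 ssh -p ha-373000 -- ping -c 1 -W 1 192.169.0.254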
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-373000 -n ha-373000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-373000 -n ha-373000: exit status 2 (149.427724ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-373000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (98.79s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (2.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:413: expected profile "ha-373000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-373000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-373000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-373000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-373000 -n ha-373000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-373000 -n ha-373000: exit status 2 (149.589788ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-373000 logs -n 25: (2.093832712s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-373000 cp ha-373000-m03:/home/docker/cp-test.txt                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04:/home/docker/cp-test_ha-373000-m03_ha-373000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n ha-373000-m04 sudo cat                                                                                      | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /home/docker/cp-test_ha-373000-m03_ha-373000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-373000 cp testdata/cp-test.txt                                                                                            | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-373000 cp ha-373000-m04:/home/docker/cp-test.txt                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1633039751/001/cp-test_ha-373000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-373000 cp ha-373000-m04:/home/docker/cp-test.txt                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000:/home/docker/cp-test_ha-373000-m04_ha-373000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n ha-373000 sudo cat                                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /home/docker/cp-test_ha-373000-m04_ha-373000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-373000 cp ha-373000-m04:/home/docker/cp-test.txt                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m02:/home/docker/cp-test_ha-373000-m04_ha-373000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n ha-373000-m02 sudo cat                                                                                      | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /home/docker/cp-test_ha-373000-m04_ha-373000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-373000 cp ha-373000-m04:/home/docker/cp-test.txt                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m03:/home/docker/cp-test_ha-373000-m04_ha-373000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n ha-373000-m03 sudo cat                                                                                      | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /home/docker/cp-test_ha-373000-m04_ha-373000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-373000 node stop m02 -v=7                                                                                                 | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-373000 node start m02 -v=7                                                                                                | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:04 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-373000 -v=7                                                                                                       | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:04 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-373000 -v=7                                                                                                            | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:04 PDT | 18 Aug 24 12:04 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-373000 --wait=true -v=7                                                                                                | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:04 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-373000                                                                                                            | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:08 PDT |                     |
	| node    | ha-373000 node delete m03 -v=7                                                                                               | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:08 PDT | 18 Aug 24 12:08 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-373000 stop -v=7                                                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:08 PDT | 18 Aug 24 12:09 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-373000 --wait=true                                                                                                     | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:09 PDT |                     |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 12:09:00
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 12:09:00.388954    3976 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:09:00.389224    3976 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:09:00.389230    3976 out.go:358] Setting ErrFile to fd 2...
	I0818 12:09:00.389234    3976 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:09:00.389403    3976 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1007/.minikube/bin
	I0818 12:09:00.390788    3976 out.go:352] Setting JSON to false
	I0818 12:09:00.412980    3976 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2311,"bootTime":1724005829,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0818 12:09:00.413073    3976 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:09:00.435491    3976 out.go:177] * [ha-373000] minikube v1.33.1 on Darwin 14.6.1
	I0818 12:09:00.478012    3976 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:09:00.478041    3976 notify.go:220] Checking for updates...
	I0818 12:09:00.520842    3976 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:09:00.541902    3976 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0818 12:09:00.562974    3976 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:09:00.583978    3976 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	I0818 12:09:00.604937    3976 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:09:00.626633    3976 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:09:00.627309    3976 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:09:00.627392    3976 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:09:00.636929    3976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52014
	I0818 12:09:00.637287    3976 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:09:00.637735    3976 main.go:141] libmachine: Using API Version  1
	I0818 12:09:00.637744    3976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:09:00.637948    3976 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:09:00.638063    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:00.638277    3976 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:09:00.638525    3976 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:09:00.638545    3976 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:09:00.646880    3976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52016
	I0818 12:09:00.647224    3976 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:09:00.647595    3976 main.go:141] libmachine: Using API Version  1
	I0818 12:09:00.647613    3976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:09:00.647826    3976 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:09:00.647950    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:00.676977    3976 out.go:177] * Using the hyperkit driver based on existing profile
	I0818 12:09:00.718931    3976 start.go:297] selected driver: hyperkit
	I0818 12:09:00.718961    3976 start.go:901] validating driver "hyperkit" against &{Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:09:00.719183    3976 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:09:00.719386    3976 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:09:00.719595    3976 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19423-1007/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0818 12:09:00.729307    3976 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0818 12:09:00.733175    3976 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:09:00.733199    3976 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0818 12:09:00.735834    3976 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:09:00.735880    3976 cni.go:84] Creating CNI manager for ""
	I0818 12:09:00.735888    3976 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0818 12:09:00.735960    3976 start.go:340] cluster config:
	{Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
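	The cluster config dumped above is not rebuilt from flags; it is read back from the profile's config.json, whose path appears a few lines below. A quick way to inspect the node list that drives the "multinode detected, recommending kindnet" decision (assumes jq on the host):
	
	# Persisted profile config mirrors the dump above: m03 was deleted, m04 has Port:0
	jq '.Nodes' /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json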
	I0818 12:09:00.736064    3976 iso.go:125] acquiring lock: {Name:mkfcc70059a4397f76252ba67d02448d3569c468 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:09:00.757023    3976 out.go:177] * Starting "ha-373000" primary control-plane node in "ha-373000" cluster
	I0818 12:09:00.777783    3976 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:09:00.777901    3976 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0818 12:09:00.777924    3976 cache.go:56] Caching tarball of preloaded images
	I0818 12:09:00.778128    3976 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0818 12:09:00.778148    3976 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:09:00.778333    3976 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:09:00.779289    3976 start.go:360] acquireMachinesLock for ha-373000: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:09:00.779483    3976 start.go:364] duration metric: took 143.76µs to acquireMachinesLock for "ha-373000"
	I0818 12:09:00.779521    3976 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:09:00.779537    3976 fix.go:54] fixHost starting: 
	I0818 12:09:00.779956    3976 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:09:00.779984    3976 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:09:00.789309    3976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52018
	I0818 12:09:00.789666    3976 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:09:00.790031    3976 main.go:141] libmachine: Using API Version  1
	I0818 12:09:00.790040    3976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:09:00.790251    3976 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:09:00.790366    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:00.790468    3976 main.go:141] libmachine: (ha-373000) Calling .GetState
	I0818 12:09:00.790556    3976 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:00.790639    3976 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid from json: 3836
	I0818 12:09:00.791548    3976 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid 3836 missing from process table
	I0818 12:09:00.791593    3976 fix.go:112] recreateIfNeeded on ha-373000: state=Stopped err=<nil>
	I0818 12:09:00.791619    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	W0818 12:09:00.791703    3976 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:09:00.833742    3976 out.go:177] * Restarting existing hyperkit VM for "ha-373000" ...
	I0818 12:09:00.854617    3976 main.go:141] libmachine: (ha-373000) Calling .Start
	I0818 12:09:00.854890    3976 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:00.854917    3976 main.go:141] libmachine: (ha-373000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid
	I0818 12:09:00.856657    3976 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid 3836 missing from process table
	I0818 12:09:00.856693    3976 main.go:141] libmachine: (ha-373000) DBG | pid 3836 is in state "Stopped"
	I0818 12:09:00.856718    3976 main.go:141] libmachine: (ha-373000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid...
	I0818 12:09:00.856984    3976 main.go:141] libmachine: (ha-373000) DBG | Using UUID 2f6e5f86-d003-4f9b-8f55-d5f48a14c3df
	I0818 12:09:00.989123    3976 main.go:141] libmachine: (ha-373000) DBG | Generated MAC be:21:66:25:9a:b1
	I0818 12:09:00.989174    3976 main.go:141] libmachine: (ha-373000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000
	I0818 12:09:00.989237    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2f6e5f86-d003-4f9b-8f55-d5f48a14c3df", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c2540)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:09:00.989280    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2f6e5f86-d003-4f9b-8f55-d5f48a14c3df", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c2540)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:09:00.989323    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2f6e5f86-d003-4f9b-8f55-d5f48a14c3df", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/ha-373000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd,earlyprintk=s
erial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"}
	I0818 12:09:00.989366    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2f6e5f86-d003-4f9b-8f55-d5f48a14c3df -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/ha-373000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset
norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"
	I0818 12:09:00.989381    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 12:09:00.990799    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 DEBUG: hyperkit: Pid is 3990
	I0818 12:09:00.991176    3976 main.go:141] libmachine: (ha-373000) DBG | Attempt 0
	I0818 12:09:00.991196    3976 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:00.991218    3976 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid from json: 3990
	I0818 12:09:00.993000    3976 main.go:141] libmachine: (ha-373000) DBG | Searching for be:21:66:25:9a:b1 in /var/db/dhcpd_leases ...
	I0818 12:09:00.993068    3976 main.go:141] libmachine: (ha-373000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0818 12:09:00.993082    3976 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:09:00.993090    3976 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:09:00.993097    3976 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c3975b}
	I0818 12:09:00.993119    3976 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39749}
	I0818 12:09:00.993129    3976 main.go:141] libmachine: (ha-373000) DBG | Found match: be:21:66:25:9a:b1
	I0818 12:09:00.993139    3976 main.go:141] libmachine: (ha-373000) DBG | IP: 192.169.0.5
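	IP discovery works exactly as the DBG lines show: the driver records the generated MAC, then scans the macOS host's vmnet DHCP lease file for a matching entry. The same lookup can be done by hand on the host:
	
	# Each lease is a small { name=... ip_address=... hw_address=... } block in this file
	grep -B 2 -A 2 'be:21:66:25:9a:b1' /var/db/dhcpd_leases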
	I0818 12:09:00.993184    3976 main.go:141] libmachine: (ha-373000) Calling .GetConfigRaw
	I0818 12:09:00.994094    3976 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:09:00.994333    3976 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:09:00.994945    3976 machine.go:93] provisionDockerMachine start ...
	I0818 12:09:00.994967    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:00.995142    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:00.995271    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:00.995391    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:00.995521    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:00.995632    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:00.995768    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:00.996051    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:00.996062    3976 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 12:09:00.999904    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 12:09:01.080830    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 12:09:01.081571    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:09:01.081587    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:09:01.081595    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:09:01.081604    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:09:01.460230    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 12:09:01.460268    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 12:09:01.574713    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:09:01.574755    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:09:01.574768    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:09:01.574787    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:09:01.575699    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 12:09:01.575710    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 12:09:07.163001    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:07 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0818 12:09:07.163029    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:07 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0818 12:09:07.163053    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:07 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0818 12:09:07.186829    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:07 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0818 12:09:12.062770    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 12:09:12.062784    3976 main.go:141] libmachine: (ha-373000) Calling .GetMachineName
	I0818 12:09:12.062975    3976 buildroot.go:166] provisioning hostname "ha-373000"
	I0818 12:09:12.062986    3976 main.go:141] libmachine: (ha-373000) Calling .GetMachineName
	I0818 12:09:12.063087    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.063175    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:12.063280    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.063371    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.063480    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:12.063605    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:12.063750    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:12.063759    3976 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-373000 && echo "ha-373000" | sudo tee /etc/hostname
	I0818 12:09:12.131801    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-373000
	
	I0818 12:09:12.131819    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.131954    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:12.132061    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.132144    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.132224    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:12.132376    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:12.132528    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:12.132546    3976 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-373000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-373000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-373000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 12:09:12.199331    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
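	The /etc/hosts snippet above is idempotent: the first grep (-x matches the whole line) checks whether any line already ends in the new hostname; if not, it rewrites an existing 127.0.1.1 entry in place, or appends one if none exists. The empty output here means no edit was needed. Verifying the mapping on the guest is one command:
	
	# Confirm the node name resolves locally inside the VM
	out/minikube-darwin-amd64 ssh -p ha-373000 -- grep ha-373000 /etc/hosts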
	I0818 12:09:12.199349    3976 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1007/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1007/.minikube}
	I0818 12:09:12.199369    3976 buildroot.go:174] setting up certificates
	I0818 12:09:12.199383    3976 provision.go:84] configureAuth start
	I0818 12:09:12.199391    3976 main.go:141] libmachine: (ha-373000) Calling .GetMachineName
	I0818 12:09:12.199540    3976 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:09:12.199634    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.199719    3976 provision.go:143] copyHostCerts
	I0818 12:09:12.199749    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:09:12.199819    3976 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem, removing ...
	I0818 12:09:12.199828    3976 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:09:12.199960    3976 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem (1082 bytes)
	I0818 12:09:12.200176    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:09:12.200222    3976 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem, removing ...
	I0818 12:09:12.200227    3976 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:09:12.200306    3976 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem (1123 bytes)
	I0818 12:09:12.200461    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:09:12.200505    3976 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem, removing ...
	I0818 12:09:12.200509    3976 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:09:12.200584    3976 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem (1675 bytes)
	I0818 12:09:12.200731    3976 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem org=jenkins.ha-373000 san=[127.0.0.1 192.169.0.5 ha-373000 localhost minikube]
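	configureAuth regenerates the machine's server certificate with the SAN list shown above (loopback, the VM IP, the hostname, localhost, minikube), signed by the CA under .minikube/certs. The SANs of the result can be checked with openssl, using the path from the log above:
	
	# Print the Subject Alternative Names baked into the regenerated server cert
	openssl x509 -in /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem -noout -text | grep -A 1 'Subject Alternative Name'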
	I0818 12:09:12.289022    3976 provision.go:177] copyRemoteCerts
	I0818 12:09:12.289076    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 12:09:12.289091    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.289227    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:12.289322    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.289416    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:12.289508    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:09:12.325856    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 12:09:12.325929    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 12:09:12.345953    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 12:09:12.346012    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0818 12:09:12.366027    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 12:09:12.366092    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 12:09:12.386212    3976 provision.go:87] duration metric: took 186.823558ms to configureAuth
	I0818 12:09:12.386225    3976 buildroot.go:189] setting minikube options for container-runtime
	I0818 12:09:12.386405    3976 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:09:12.386418    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:12.386551    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.386643    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:12.386731    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.386817    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.386909    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:12.387025    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:12.387159    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:12.387167    3976 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0818 12:09:12.445833    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0818 12:09:12.445851    3976 buildroot.go:70] root file system type: tmpfs
	I0818 12:09:12.445930    3976 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0818 12:09:12.445943    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.446067    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:12.446173    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.446279    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.446389    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:12.446543    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:12.446679    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:12.446725    3976 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0818 12:09:12.516077    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0818 12:09:12.516100    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.516233    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:12.516348    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.516437    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.516526    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:12.516667    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:12.516813    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:12.516825    3976 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0818 12:09:14.219167    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0818 12:09:14.219181    3976 machine.go:96] duration metric: took 13.22463913s to provisionDockerMachine
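Note the update idiom in the SSH command above: the candidate unit is written to docker.service.new, and the move/daemon-reload/restart branch runs only when diff reports a difference (here the unit did not exist yet, hence the "can't stat" message followed by the symlink creation). A minimal standalone sketch of the same pattern, with paths taken from the log (not the exact minikube code):

	NEW=/lib/systemd/system/docker.service.new
	CUR=/lib/systemd/system/docker.service
	# diff exits non-zero when the files differ or CUR is missing,
	# so the replace-and-restart branch only runs on a real change
	if ! sudo diff -u "$CUR" "$NEW"; then
	  sudo mv "$NEW" "$CUR"
	  sudo systemctl daemon-reload
	  sudo systemctl -f enable docker
	  sudo systemctl -f restart docker
	fi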
	I0818 12:09:14.219193    3976 start.go:293] postStartSetup for "ha-373000" (driver="hyperkit")
	I0818 12:09:14.219201    3976 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 12:09:14.219211    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:14.219390    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 12:09:14.219417    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:14.219519    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:14.219630    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:14.219724    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:14.219808    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:09:14.259561    3976 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 12:09:14.263959    3976 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 12:09:14.263976    3976 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/addons for local assets ...
	I0818 12:09:14.264080    3976 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/files for local assets ...
	I0818 12:09:14.264273    3976 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> 15262.pem in /etc/ssl/certs
	I0818 12:09:14.264280    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /etc/ssl/certs/15262.pem
	I0818 12:09:14.264487    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 12:09:14.272283    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:09:14.302942    3976 start.go:296] duration metric: took 83.742133ms for postStartSetup
	I0818 12:09:14.302965    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:14.303146    3976 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0818 12:09:14.303160    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:14.303248    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:14.303361    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:14.303436    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:14.303526    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:09:14.338080    3976 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0818 12:09:14.338142    3976 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0818 12:09:14.391638    3976 fix.go:56] duration metric: took 13.612527396s for fixHost
	I0818 12:09:14.391662    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:14.391810    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:14.391899    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:14.391991    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:14.392074    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:14.392222    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:14.392364    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:14.392372    3976 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 12:09:14.449746    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724008154.620949792
	
	I0818 12:09:14.449760    3976 fix.go:216] guest clock: 1724008154.620949792
	I0818 12:09:14.449772    3976 fix.go:229] Guest: 2024-08-18 12:09:14.620949792 -0700 PDT Remote: 2024-08-18 12:09:14.391652 -0700 PDT m=+14.038170292 (delta=229.297792ms)
	I0818 12:09:14.449789    3976 fix.go:200] guest clock delta is within tolerance: 229.297792ms
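The fix.go lines above read the guest clock over SSH with date +%s.%N, compare it to the host clock, and only force a resync when the delta exceeds a tolerance. A rough shell equivalent, with the IP and user taken from the log (this assumes GNU date on the host, which stock macOS lacks; minikube itself computes the delta in Go):

	host_now=$(date +%s.%N)                              # host clock
	guest_now=$(ssh docker@192.169.0.5 'date +%s.%N')    # guest clock
	echo "delta: $(echo "$guest_now - $host_now" | bc)s" # here ~0.23s, within tolerance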
	I0818 12:09:14.449793    3976 start.go:83] releasing machines lock for "ha-373000", held for 13.670724274s
	I0818 12:09:14.449812    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:14.449942    3976 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:09:14.450037    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:14.450349    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:14.450474    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:14.450548    3976 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 12:09:14.450580    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:14.450637    3976 ssh_runner.go:195] Run: cat /version.json
	I0818 12:09:14.450648    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:14.450688    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:14.450746    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:14.450782    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:14.450836    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:14.450854    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:14.450935    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:14.450952    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:09:14.451045    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:09:14.541757    3976 ssh_runner.go:195] Run: systemctl --version
	I0818 12:09:14.546793    3976 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 12:09:14.550801    3976 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 12:09:14.550839    3976 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 12:09:14.564129    3976 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 12:09:14.564141    3976 start.go:495] detecting cgroup driver to use...
	I0818 12:09:14.564243    3976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:09:14.581664    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0818 12:09:14.590425    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0818 12:09:14.599077    3976 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0818 12:09:14.599120    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0818 12:09:14.607868    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:09:14.616526    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0818 12:09:14.625074    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:09:14.633725    3976 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 12:09:14.642461    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0818 12:09:14.651030    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0818 12:09:14.659717    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0818 12:09:14.668509    3976 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 12:09:14.676419    3976 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
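The ip_forward write above takes effect immediately but does not survive a reboot; inside the minikube VM that is fine because provisioning reruns on every start. On a persistent host the durable equivalent (file name illustrative) would be:

	echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf
	sudo sysctl --system    # reload all sysctl configuration fragments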
	I0818 12:09:14.684357    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:14.777696    3976 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0818 12:09:14.795379    3976 start.go:495] detecting cgroup driver to use...
	I0818 12:09:14.795465    3976 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0818 12:09:14.808091    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:09:14.819351    3976 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 12:09:14.834858    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:09:14.845068    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:09:14.855088    3976 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0818 12:09:14.879151    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:09:14.889782    3976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:09:14.904555    3976 ssh_runner.go:195] Run: which cri-dockerd
	I0818 12:09:14.907616    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0818 12:09:14.914893    3976 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0818 12:09:14.928498    3976 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0818 12:09:15.021302    3976 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0818 12:09:15.126534    3976 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0818 12:09:15.126611    3976 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0818 12:09:15.141437    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:15.238491    3976 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 12:09:17.633635    3976 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.395193434s)
	I0818 12:09:17.633701    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0818 12:09:17.644119    3976 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0818 12:09:17.657413    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:09:17.668074    3976 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0818 12:09:17.762478    3976 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0818 12:09:17.858367    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:17.948600    3976 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0818 12:09:17.962148    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:09:17.972120    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:18.070649    3976 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0818 12:09:18.132791    3976 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0818 12:09:18.132869    3976 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0818 12:09:18.137140    3976 start.go:563] Will wait 60s for crictl version
	I0818 12:09:18.137200    3976 ssh_runner.go:195] Run: which crictl
	I0818 12:09:18.140608    3976 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 12:09:18.167352    3976 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0818 12:09:18.167422    3976 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:09:18.186476    3976 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:09:18.224169    3976 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0818 12:09:18.224214    3976 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:09:18.224595    3976 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0818 12:09:18.229086    3976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
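The /etc/hosts rewrite above is the usual idempotent pattern: filter out any stale host.minikube.internal line, append the current mapping, and write through a temp file plus cp (redirecting straight into /etc/hosts while grep is still reading it would truncate the file). Expanded for readability, values from the log:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '192.169.0.1\thost.minikube.internal\n'
	} > /tmp/h.$$
	sudo cp "/tmp/h.$$" /etc/hosts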
	I0818 12:09:18.238631    3976 kubeadm.go:883] updating cluster {Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 12:09:18.238717    3976 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:09:18.238780    3976 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0818 12:09:18.252546    3976 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0818 12:09:18.252557    3976 docker.go:615] Images already preloaded, skipping extraction
	I0818 12:09:18.252627    3976 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0818 12:09:18.266684    3976 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
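The image list is dumped twice: once to decide whether the preload tarball needs extracting, and once to confirm the cache afterward. A rough way to check the same invariant by hand (versions from the log; the regex is a loose match, not an exhaustive manifest check):

	docker images --format '{{.Repository}}:{{.Tag}}' \
	  | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy):v1.31.0'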
	I0818 12:09:18.266703    3976 cache_images.go:84] Images are preloaded, skipping loading
	I0818 12:09:18.266713    3976 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0818 12:09:18.266790    3976 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-373000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
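As with the Docker unit earlier, the empty ExecStart= line resets the ExecStart inherited from the base kubelet.service before the drop-in supplies its own; systemd rejects multiple ExecStart entries for non-oneshot services. To see how the base unit and the 10-kubeadm.conf drop-in compose on the node:

	systemctl cat kubelet.service    # prints the merged unit plus all drop-ins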
	I0818 12:09:18.266861    3976 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0818 12:09:18.304192    3976 cni.go:84] Creating CNI manager for ""
	I0818 12:09:18.304204    3976 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0818 12:09:18.304213    3976 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 12:09:18.304229    3976 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-373000 NodeName:ha-373000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 12:09:18.304320    3976 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-373000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
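On a fresh control plane a config like the one above would be handed to kubeadm directly; in this run the cluster is being restarted, so the file is only staged as kubeadm.yaml.new and diffed against the running copy (see the restartPrimaryControlPlane step further down). The fresh-install form, for reference, with the path matching the scp target below:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml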
	
	I0818 12:09:18.304334    3976 kube-vip.go:115] generating kube-vip config ...
	I0818 12:09:18.304382    3976 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0818 12:09:18.316732    3976 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0818 12:09:18.316793    3976 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
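This manifest is a kubelet static pod: copying it into the staticPodPath from the KubeletConfiguration above (which the kube-vip.yaml scp a few lines below does) is all it takes to start kube-vip, which then holds the 192.169.0.254 VIP via ARP-based leader election. Once it is up, the VIP should answer on the API port; a quick smoke test (address and port from the env vars above):

	curl -k https://192.169.0.254:8443/version    # -k: the apiserver cert is minikube's own CA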
	I0818 12:09:18.316840    3976 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 12:09:18.324597    3976 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 12:09:18.324641    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0818 12:09:18.331779    3976 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0818 12:09:18.345158    3976 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 12:09:18.358298    3976 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0818 12:09:18.372286    3976 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0818 12:09:18.385485    3976 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0818 12:09:18.388341    3976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 12:09:18.397526    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:18.496612    3976 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 12:09:18.511160    3976 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000 for IP: 192.169.0.5
	I0818 12:09:18.511172    3976 certs.go:194] generating shared ca certs ...
	I0818 12:09:18.511184    3976 certs.go:226] acquiring lock for ca certs: {Name:mkf9c4ce6e92bb713042213e4605fc93500c1ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:09:18.511356    3976 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key
	I0818 12:09:18.511436    3976 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key
	I0818 12:09:18.511446    3976 certs.go:256] generating profile certs ...
	I0818 12:09:18.511538    3976 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.key
	I0818 12:09:18.511564    3976 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.55d44b69
	I0818 12:09:18.511579    3976 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.55d44b69 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0818 12:09:18.678090    3976 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.55d44b69 ...
	I0818 12:09:18.678108    3976 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.55d44b69: {Name:mk412ce60d50ec37c24febde03f7225e8a48a24c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:09:18.678466    3976 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.55d44b69 ...
	I0818 12:09:18.678480    3976 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.55d44b69: {Name:mke31239238122280f7cbf00316b2acd43533e9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:09:18.678743    3976 certs.go:381] copying /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.55d44b69 -> /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt
	I0818 12:09:18.678987    3976 certs.go:385] copying /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.55d44b69 -> /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key
	I0818 12:09:18.679293    3976 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key
	I0818 12:09:18.679306    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0818 12:09:18.679332    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0818 12:09:18.679353    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0818 12:09:18.679374    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0818 12:09:18.679394    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0818 12:09:18.679414    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0818 12:09:18.679441    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0818 12:09:18.679462    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0818 12:09:18.679567    3976 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem (1338 bytes)
	W0818 12:09:18.679618    3976 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526_empty.pem, impossibly tiny 0 bytes
	I0818 12:09:18.679629    3976 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem (1675 bytes)
	I0818 12:09:18.679662    3976 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem (1082 bytes)
	I0818 12:09:18.679695    3976 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem (1123 bytes)
	I0818 12:09:18.679735    3976 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem (1675 bytes)
	I0818 12:09:18.679815    3976 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:09:18.679851    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /usr/share/ca-certificates/15262.pem
	I0818 12:09:18.679895    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:09:18.679917    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem -> /usr/share/ca-certificates/1526.pem
	I0818 12:09:18.680416    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 12:09:18.731491    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0818 12:09:18.777149    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 12:09:18.836957    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0818 12:09:18.879727    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0818 12:09:18.904838    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 12:09:18.933787    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 12:09:18.969389    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 12:09:18.994753    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /usr/share/ca-certificates/15262.pem (1708 bytes)
	I0818 12:09:19.013849    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 12:09:19.033471    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem --> /usr/share/ca-certificates/1526.pem (1338 bytes)
	I0818 12:09:19.052595    3976 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 12:09:19.066128    3976 ssh_runner.go:195] Run: openssl version
	I0818 12:09:19.070271    3976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15262.pem && ln -fs /usr/share/ca-certificates/15262.pem /etc/ssl/certs/15262.pem"
	I0818 12:09:19.079228    3976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15262.pem
	I0818 12:09:19.082728    3976 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:54 /usr/share/ca-certificates/15262.pem
	I0818 12:09:19.082763    3976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15262.pem
	I0818 12:09:19.086877    3976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15262.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 12:09:19.095804    3976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 12:09:19.104889    3976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:09:19.108208    3976 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:38 /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:09:19.108241    3976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:09:19.112406    3976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 12:09:19.121720    3976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1526.pem && ln -fs /usr/share/ca-certificates/1526.pem /etc/ssl/certs/1526.pem"
	I0818 12:09:19.130845    3976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1526.pem
	I0818 12:09:19.134345    3976 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:54 /usr/share/ca-certificates/1526.pem
	I0818 12:09:19.134389    3976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1526.pem
	I0818 12:09:19.138941    3976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1526.pem /etc/ssl/certs/51391683.0"
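The <hash>.0 symlinks created above (3ec20f2e.0, b5213941.0, 51391683.0) implement OpenSSL's hashed certificate-directory lookup: the link name is the subject hash printed by openssl x509 -hash, which lets TLS libraries locate a CA by hashing the issuer they are searching for. Reproduced by hand for the minikube CA:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	ls -l "/etc/ssl/certs/${h}.0"    # -> b5213941.0, matching the log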
	I0818 12:09:19.148376    3976 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 12:09:19.151715    3976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 12:09:19.155985    3976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 12:09:19.160273    3976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 12:09:19.165064    3976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 12:09:19.169962    3976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 12:09:19.174244    3976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
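The -checkend 86400 runs are expiry guards: openssl exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, so a near-expiry control-plane cert forces regeneration. For example:

	openssl x509 -noout -checkend 86400 \
	  -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	  && echo 'valid for >24h' || echo 'expires within 24h'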
	I0818 12:09:19.178473    3976 kubeadm.go:392] StartCluster: {Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:09:19.178593    3976 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0818 12:09:19.190838    3976 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 12:09:19.199172    3976 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 12:09:19.199186    3976 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 12:09:19.199227    3976 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 12:09:19.207402    3976 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 12:09:19.207710    3976 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-373000" does not appear in /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:09:19.207791    3976 kubeconfig.go:62] /Users/jenkins/minikube-integration/19423-1007/kubeconfig needs updating (will repair): [kubeconfig missing "ha-373000" cluster setting kubeconfig missing "ha-373000" context setting]
	I0818 12:09:19.207967    3976 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/kubeconfig: {Name:mk884388b2510ace5b23e8fdd3343cda9524a1f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:09:19.208584    3976 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:09:19.208770    3976 kapi.go:59] client config for ha-373000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x52acf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0818 12:09:19.209064    3976 cert_rotation.go:140] Starting client certificate rotation controller
	I0818 12:09:19.209255    3976 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 12:09:19.217108    3976 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0818 12:09:19.217125    3976 kubeadm.go:597] duration metric: took 17.934031ms to restartPrimaryControlPlane
	I0818 12:09:19.217132    3976 kubeadm.go:394] duration metric: took 38.665023ms to StartCluster
	I0818 12:09:19.217145    3976 settings.go:142] acquiring lock: {Name:mk54874f9a6ab5b428c8697c9ef71cbcdde2d89e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:09:19.217216    3976 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:09:19.217617    3976 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/kubeconfig: {Name:mk884388b2510ace5b23e8fdd3343cda9524a1f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:09:19.217869    3976 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:09:19.217886    3976 start.go:241] waiting for startup goroutines ...
	I0818 12:09:19.217906    3976 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 12:09:19.217983    3976 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:09:19.263234    3976 out.go:177] * Enabled addons: 
	I0818 12:09:19.284302    3976 addons.go:510] duration metric: took 66.388858ms for enable addons: enabled=[]
	I0818 12:09:19.284387    3976 start.go:246] waiting for cluster config update ...
	I0818 12:09:19.284400    3976 start.go:255] writing updated cluster config ...
	I0818 12:09:19.306484    3976 out.go:201] 
	I0818 12:09:19.327608    3976 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:09:19.327742    3976 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:09:19.350369    3976 out.go:177] * Starting "ha-373000-m02" control-plane node in "ha-373000" cluster
	I0818 12:09:19.392104    3976 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:09:19.392164    3976 cache.go:56] Caching tarball of preloaded images
	I0818 12:09:19.392336    3976 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0818 12:09:19.392355    3976 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:09:19.392486    3976 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:09:19.393415    3976 start.go:360] acquireMachinesLock for ha-373000-m02: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:09:19.393522    3976 start.go:364] duration metric: took 80.918µs to acquireMachinesLock for "ha-373000-m02"
	I0818 12:09:19.393546    3976 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:09:19.393556    3976 fix.go:54] fixHost starting: m02
	I0818 12:09:19.393965    3976 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:09:19.393990    3976 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:09:19.403655    3976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52040
	I0818 12:09:19.404217    3976 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:09:19.404634    3976 main.go:141] libmachine: Using API Version  1
	I0818 12:09:19.404650    3976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:09:19.405004    3976 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:09:19.405118    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:19.405222    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetState
	I0818 12:09:19.405303    3976 main.go:141] libmachine: (ha-373000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:19.405380    3976 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid from json: 3847
	I0818 12:09:19.406287    3976 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid 3847 missing from process table
	I0818 12:09:19.406302    3976 fix.go:112] recreateIfNeeded on ha-373000-m02: state=Stopped err=<nil>
	I0818 12:09:19.406312    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	W0818 12:09:19.406463    3976 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:09:19.448356    3976 out.go:177] * Restarting existing hyperkit VM for "ha-373000-m02" ...
	I0818 12:09:19.469229    3976 main.go:141] libmachine: (ha-373000-m02) Calling .Start
	I0818 12:09:19.469501    3976 main.go:141] libmachine: (ha-373000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:19.469542    3976 main.go:141] libmachine: (ha-373000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid
	I0818 12:09:19.471314    3976 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid 3847 missing from process table
	I0818 12:09:19.471327    3976 main.go:141] libmachine: (ha-373000-m02) DBG | pid 3847 is in state "Stopped"
	I0818 12:09:19.471351    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid...
	I0818 12:09:19.471584    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Using UUID 7a237572-4e62-4b98-a476-83254bfde967
	I0818 12:09:19.500704    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Generated MAC ca:b5:c4:e6:47:79
	I0818 12:09:19.500730    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000
	I0818 12:09:19.500855    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7a237572-4e62-4b98-a476-83254bfde967", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c4600)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:09:19.500885    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7a237572-4e62-4b98-a476-83254bfde967", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c4600)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:09:19.500929    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "7a237572-4e62-4b98-a476-83254bfde967", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/ha-373000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"}
	I0818 12:09:19.500977    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 7a237572-4e62-4b98-a476-83254bfde967 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/ha-373000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"
	I0818 12:09:19.500998    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 12:09:19.502361    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 DEBUG: hyperkit: Pid is 3997
	I0818 12:09:19.502828    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Attempt 0
	I0818 12:09:19.502885    3976 main.go:141] libmachine: (ha-373000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:19.502920    3976 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid from json: 3997
	I0818 12:09:19.504725    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Searching for ca:b5:c4:e6:47:79 in /var/db/dhcpd_leases ...
	I0818 12:09:19.504780    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0818 12:09:19.504828    3976 main.go:141] libmachine: (ha-373000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:09:19.504848    3976 main.go:141] libmachine: (ha-373000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:09:19.504870    3976 main.go:141] libmachine: (ha-373000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:09:19.504882    3976 main.go:141] libmachine: (ha-373000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c3975b}
	I0818 12:09:19.504895    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Found match: ca:b5:c4:e6:47:79
	I0818 12:09:19.504900    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetConfigRaw
	I0818 12:09:19.504907    3976 main.go:141] libmachine: (ha-373000-m02) DBG | IP: 192.169.0.6
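
IP discovery here is a scan of macOS's vmnet DHCP lease database for the MAC the driver generated. A hedged shell equivalent (assuming the usual /var/db/dhcpd_leases layout, where each lease is a {...} block and ip_address precedes hw_address):

    # Print the lease block lines around the matching hardware address.
    MAC='ca:b5:c4:e6:47:79'
    sudo grep -B 3 "hw_address=1,$MAC" /var/db/dhcpd_leases
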
	I0818 12:09:19.505665    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetIP
	I0818 12:09:19.505858    3976 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:09:19.506316    3976 machine.go:93] provisionDockerMachine start ...
	I0818 12:09:19.506328    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:19.506474    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:19.506602    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:19.506707    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:19.506790    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:19.506894    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:19.507039    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:19.507197    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:19.507205    3976 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 12:09:19.510551    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 12:09:19.519215    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 12:09:19.520168    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:09:19.520203    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:09:19.520228    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:09:19.520254    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:09:19.902342    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 12:09:19.902357    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 12:09:20.017440    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:09:20.017463    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:09:20.017471    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:09:20.017477    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:09:20.018303    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 12:09:20.018315    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 12:09:25.632462    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:25 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0818 12:09:25.632549    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:25 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0818 12:09:25.632559    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:25 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0818 12:09:25.657887    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:25 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0818 12:09:28.954523    3976 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.6:22: connect: connection refused
	I0818 12:09:32.012675    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 12:09:32.012690    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetMachineName
	I0818 12:09:32.012844    3976 buildroot.go:166] provisioning hostname "ha-373000-m02"
	I0818 12:09:32.012857    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetMachineName
	I0818 12:09:32.012969    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.013100    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:32.013206    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.013295    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.013399    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:32.013577    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:32.013797    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:32.013807    3976 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-373000-m02 && echo "ha-373000-m02" | sudo tee /etc/hostname
	I0818 12:09:32.083655    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-373000-m02
	
	I0818 12:09:32.083671    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.083802    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:32.083888    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.083968    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.084051    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:32.084177    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:32.084328    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:32.084343    3976 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-373000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-373000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-373000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
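
Spelled out: if no entry for ha-373000-m02 exists, the script either rewrites an existing 127.0.1.1 line in place (the sed branch) or appends a fresh one (the else branch), so the net result either way is a single mapping:

    127.0.1.1 ha-373000-m02
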
	I0818 12:09:32.145743    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 12:09:32.145757    3976 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1007/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1007/.minikube}
	I0818 12:09:32.145771    3976 buildroot.go:174] setting up certificates
	I0818 12:09:32.145778    3976 provision.go:84] configureAuth start
	I0818 12:09:32.145785    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetMachineName
	I0818 12:09:32.145913    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetIP
	I0818 12:09:32.146013    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.146119    3976 provision.go:143] copyHostCerts
	I0818 12:09:32.146155    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:09:32.146207    3976 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem, removing ...
	I0818 12:09:32.146213    3976 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:09:32.146346    3976 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem (1082 bytes)
	I0818 12:09:32.146563    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:09:32.146599    3976 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem, removing ...
	I0818 12:09:32.146604    3976 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:09:32.146673    3976 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem (1123 bytes)
	I0818 12:09:32.146816    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:09:32.146847    3976 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem, removing ...
	I0818 12:09:32.146852    3976 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:09:32.146916    3976 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem (1675 bytes)
	I0818 12:09:32.147063    3976 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem org=jenkins.ha-373000-m02 san=[127.0.0.1 192.169.0.6 ha-373000-m02 localhost minikube]
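
The server certificate is issued by the per-profile machine CA with the org and SANs logged above. A rough openssl equivalent (a sketch only; minikube generates this natively in Go, and bash is assumed for the process substitution):

    CERTS=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs
    # Key + CSR for the logged org
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
        -subj "/O=jenkins.ha-373000-m02"
    # Sign with the machine CA, adding the logged SANs
    openssl x509 -req -in server.csr -CA "$CERTS/ca.pem" -CAkey "$CERTS/ca-key.pem" \
        -CAcreateserial -days 365 -out server.pem \
        -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.169.0.6,DNS:ha-373000-m02,DNS:localhost,DNS:minikube')
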
	I0818 12:09:32.439235    3976 provision.go:177] copyRemoteCerts
	I0818 12:09:32.439288    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 12:09:32.439303    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.439451    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:32.439555    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.439662    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:32.439767    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:09:32.473899    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 12:09:32.473971    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 12:09:32.492902    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 12:09:32.492977    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0818 12:09:32.512205    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 12:09:32.512269    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 12:09:32.531269    3976 provision.go:87] duration metric: took 385.496037ms to configureAuth
	I0818 12:09:32.531282    3976 buildroot.go:189] setting minikube options for container-runtime
	I0818 12:09:32.531440    3976 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:09:32.531454    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:32.531586    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.531687    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:32.531797    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.531905    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.531985    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:32.532087    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:32.532212    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:32.532220    3976 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0818 12:09:32.586134    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0818 12:09:32.586145    3976 buildroot.go:70] root file system type: tmpfs
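
`df --output=fstype /` prints a header line plus the type, so the `tail -n 1` keeps only the value. A tmpfs root means changes under / do not survive a reboot, which fits the unconditional rewrite of docker.service that follows:

    $ df --output=fstype / | tail -n 1
    tmpfs
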
	I0818 12:09:32.586228    3976 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0818 12:09:32.586239    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.586366    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:32.586454    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.586566    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.586649    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:32.586801    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:32.586940    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:32.586986    3976 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0818 12:09:32.654663    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0818 12:09:32.654688    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.654820    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:32.654904    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.654974    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.655053    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:32.655180    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:32.655330    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:32.655343    3976 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
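
The command above is the idempotent-update idiom: install the new unit and restart docker only when it differs from (or is missing at) the installed path. The same pattern in isolation:

    # Swap in a new unit file only when content changed, then reload and restart.
    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
        sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
        sudo systemctl daemon-reload
        sudo systemctl enable docker
        sudo systemctl restart docker
    fi

Here diff fails because no docker.service exists yet ("can't stat" in the next line), so the swap-and-restart branch runs.
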
	I0818 12:09:34.321102    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0818 12:09:34.321115    3976 machine.go:96] duration metric: took 14.8152512s to provisionDockerMachine
	I0818 12:09:34.321123    3976 start.go:293] postStartSetup for "ha-373000-m02" (driver="hyperkit")
	I0818 12:09:34.321131    3976 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 12:09:34.321140    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:34.321324    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 12:09:34.321348    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:34.321440    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:34.321528    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:34.321619    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:34.321715    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:09:34.356724    3976 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 12:09:34.363921    3976 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 12:09:34.363935    3976 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/addons for local assets ...
	I0818 12:09:34.364038    3976 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/files for local assets ...
	I0818 12:09:34.364185    3976 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> 15262.pem in /etc/ssl/certs
	I0818 12:09:34.364192    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /etc/ssl/certs/15262.pem
	I0818 12:09:34.364347    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 12:09:34.379409    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:09:34.407459    3976 start.go:296] duration metric: took 86.328927ms for postStartSetup
	I0818 12:09:34.407481    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:34.407638    3976 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0818 12:09:34.407658    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:34.407738    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:34.407823    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:34.407908    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:34.407985    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:09:34.441305    3976 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0818 12:09:34.441365    3976 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0818 12:09:34.475756    3976 fix.go:56] duration metric: took 15.082665832s for fixHost
	I0818 12:09:34.475780    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:34.475917    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:34.476014    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:34.476109    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:34.476204    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:34.476334    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:34.476475    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:34.476483    3976 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 12:09:34.531245    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724008174.705830135
	
	I0818 12:09:34.531256    3976 fix.go:216] guest clock: 1724008174.705830135
	I0818 12:09:34.531265    3976 fix.go:229] Guest: 2024-08-18 12:09:34.705830135 -0700 PDT Remote: 2024-08-18 12:09:34.475769 -0700 PDT m=+34.122913514 (delta=230.061135ms)
	I0818 12:09:34.531276    3976 fix.go:200] guest clock delta is within tolerance: 230.061135ms
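
The guest clock is read with a plain `date +%s.%N` over SSH and compared against the host clock in Go; only a delta beyond the tolerance would trigger a resync. The probe can be reproduced by hand with the key from this run:

    ssh -i /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa \
        docker@192.169.0.6 'date +%s.%N'
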
	I0818 12:09:34.531281    3976 start.go:83] releasing machines lock for "ha-373000-m02", held for 15.138221498s
	I0818 12:09:34.531298    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:34.531428    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetIP
	I0818 12:09:34.555141    3976 out.go:177] * Found network options:
	I0818 12:09:34.576875    3976 out.go:177]   - NO_PROXY=192.169.0.5
	W0818 12:09:34.597784    3976 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 12:09:34.597830    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:34.598688    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:34.598932    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:34.599031    3976 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 12:09:34.599086    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	W0818 12:09:34.599150    3976 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 12:09:34.599257    3976 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0818 12:09:34.599278    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:34.599308    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:34.599482    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:34.599521    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:34.599684    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:34.599720    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:34.599871    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:34.599921    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:09:34.600032    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	W0818 12:09:34.631739    3976 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 12:09:34.631799    3976 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 12:09:34.677593    3976 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 12:09:34.677615    3976 start.go:495] detecting cgroup driver to use...
	I0818 12:09:34.677737    3976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:09:34.693773    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0818 12:09:34.702951    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0818 12:09:34.711799    3976 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0818 12:09:34.711840    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0818 12:09:34.720906    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:09:34.729957    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0818 12:09:34.738902    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:09:34.747932    3976 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 12:09:34.757312    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0818 12:09:34.766375    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0818 12:09:34.775307    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
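
Net effect of the sed series on /etc/containerd/config.toml: cgroupfs instead of the systemd cgroup driver, the runc v2 shim everywhere, the pause:3.10 sandbox image, and unprivileged ports enabled. The touched keys end up roughly like this (a sketch assuming the stock buildroot config layout):

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      restrict_oom_score_adj = false
      sandbox_image = "registry.k8s.io/pause:3.10"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false
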
	I0818 12:09:34.784400    3976 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 12:09:34.792630    3976 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 12:09:34.801021    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:34.911872    3976 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0818 12:09:34.930682    3976 start.go:495] detecting cgroup driver to use...
	I0818 12:09:34.930753    3976 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0818 12:09:34.944697    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:09:34.956782    3976 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 12:09:34.974233    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:09:34.986114    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:09:34.998297    3976 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0818 12:09:35.018378    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:09:35.029759    3976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:09:35.044553    3976 ssh_runner.go:195] Run: which cri-dockerd
	I0818 12:09:35.047654    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0818 12:09:35.055897    3976 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0818 12:09:35.069339    3976 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0818 12:09:35.163048    3976 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0818 12:09:35.263866    3976 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0818 12:09:35.263888    3976 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0818 12:09:35.281642    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:35.375004    3976 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 12:10:36.400829    3976 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.027707476s)
	I0818 12:10:36.400907    3976 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0818 12:10:36.437434    3976 out.go:201] 
	W0818 12:10:36.459246    3976 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 18 19:09:33 ha-373000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 18 19:09:33 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:33.132734313Z" level=info msg="Starting up"
	Aug 18 19:09:33 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:33.133217341Z" level=info msg="containerd not running, starting managed containerd"
	Aug 18 19:09:33 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:33.133706453Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=503
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.150884592Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165526624Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165600672Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165665661Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165701505Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165883163Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165980711Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166114419Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166158739Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166192923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166222480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166373263Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166624364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168284638Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168338968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168477528Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168522410Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168684236Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168742254Z" level=info msg="metadata content store policy set" policy=shared
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172229271Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172291175Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172328725Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172361584Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172397084Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172469115Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172636000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172713269Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172756026Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172790721Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172822478Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172857013Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172889097Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172923123Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172955052Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172985350Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173017995Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173047134Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173082956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173138952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173171857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173266115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173303729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173337305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173367548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173397195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173426651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173461907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173491945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173521151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173551817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173584158Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173620017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173651734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173681138Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173753818Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173797160Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173851051Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173888629Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173919044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173948712Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173979628Z" level=info msg="NRI interface is disabled by configuration."
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.174202763Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.174288578Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.174373231Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.174419718Z" level=info msg="containerd successfully booted in 0.024281s"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.163281667Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.193454663Z" level=info msg="Loading containers: start."
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.358483324Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.419779026Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.464759087Z" level=info msg="Loading containers: done."
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.475407585Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.475556691Z" level=info msg="Daemon has completed initialization"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.493178383Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.493236047Z" level=info msg="API listen on [::]:2376"
	Aug 18 19:09:34 ha-373000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 18 19:09:35 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:35.562066100Z" level=info msg="Processing signal 'terminated'"
	Aug 18 19:09:35 ha-373000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 18 19:09:35 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:35.563196599Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 18 19:09:35 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:35.563381674Z" level=info msg="Daemon shutdown complete"
	Aug 18 19:09:35 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:35.563404669Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 18 19:09:35 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:35.563423915Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 18 19:09:36 ha-373000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 18 19:09:36 ha-373000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 18 19:09:36 ha-373000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 18 19:09:36 ha-373000-m02 dockerd[1172]: time="2024-08-18T19:09:36.603637435Z" level=info msg="Starting up"
	Aug 18 19:10:36 ha-373000-m02 dockerd[1172]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 18 19:10:36 ha-373000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 18 19:10:36 ha-373000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 18 19:10:36 ha-373000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0818 12:10:36.459348    3976 out.go:270] * 
	W0818 12:10:36.460605    3976 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:10:36.503171    3976 out.go:201] 
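
The journalctl excerpt above pins the failure: the first dockerd (pid 496) comes up fine on its managed containerd, but the restarted daemon (pid 1172) hangs dialing /run/containerd/containerd.sock and gives up after 60 seconds, which is exactly why `sudo systemctl restart docker` took 1m1s. Standard follow-up checks on the guest would be (generic systemd/CLI commands, not taken from this run):

    sudo systemctl status containerd --no-pager     # is the system containerd up?
    ls -l /run/containerd/containerd.sock           # does the socket dockerd dials exist?
    sudo journalctl -u containerd --no-pager | tail -n 50
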
	
	
	==> Docker <==
	Aug 18 19:09:46 ha-373000 dockerd[1177]: time="2024-08-18T19:09:46.453438095Z" level=info msg="shim disconnected" id=6f6bc1d9592b465e3b7c4d9db0e74b67040cbbed37626775b4aad68af80ddeb2 namespace=moby
	Aug 18 19:09:46 ha-373000 dockerd[1177]: time="2024-08-18T19:09:46.453510509Z" level=warning msg="cleaning up after shim disconnected" id=6f6bc1d9592b465e3b7c4d9db0e74b67040cbbed37626775b4aad68af80ddeb2 namespace=moby
	Aug 18 19:09:46 ha-373000 dockerd[1177]: time="2024-08-18T19:09:46.453519178Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 18 19:09:46 ha-373000 dockerd[1169]: time="2024-08-18T19:09:46.453809871Z" level=info msg="ignoring event" container=6f6bc1d9592b465e3b7c4d9db0e74b67040cbbed37626775b4aad68af80ddeb2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 18 19:09:47 ha-373000 dockerd[1177]: time="2024-08-18T19:09:47.461847284Z" level=info msg="shim disconnected" id=0a3644cb0db7c69e66c1b1d0f3234fba20e233a15e27e5772148d80d14f33f4e namespace=moby
	Aug 18 19:09:47 ha-373000 dockerd[1177]: time="2024-08-18T19:09:47.461900797Z" level=warning msg="cleaning up after shim disconnected" id=0a3644cb0db7c69e66c1b1d0f3234fba20e233a15e27e5772148d80d14f33f4e namespace=moby
	Aug 18 19:09:47 ha-373000 dockerd[1177]: time="2024-08-18T19:09:47.461909634Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 18 19:09:47 ha-373000 dockerd[1169]: time="2024-08-18T19:09:47.462210879Z" level=info msg="ignoring event" container=0a3644cb0db7c69e66c1b1d0f3234fba20e233a15e27e5772148d80d14f33f4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 18 19:10:04 ha-373000 dockerd[1177]: time="2024-08-18T19:10:04.870147305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 18 19:10:04 ha-373000 dockerd[1177]: time="2024-08-18T19:10:04.870333575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 18 19:10:04 ha-373000 dockerd[1177]: time="2024-08-18T19:10:04.870347019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:10:04 ha-373000 dockerd[1177]: time="2024-08-18T19:10:04.870447403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:10:06 ha-373000 dockerd[1177]: time="2024-08-18T19:10:06.866261869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 18 19:10:06 ha-373000 dockerd[1177]: time="2024-08-18T19:10:06.866358878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 18 19:10:06 ha-373000 dockerd[1177]: time="2024-08-18T19:10:06.866371963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:10:06 ha-373000 dockerd[1177]: time="2024-08-18T19:10:06.866695913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:10:27 ha-373000 dockerd[1177]: time="2024-08-18T19:10:27.508003021Z" level=info msg="shim disconnected" id=24788de6a779b6b3b57b2e2becb6248a4b461c40237836c9a085858f19bf537e namespace=moby
	Aug 18 19:10:27 ha-373000 dockerd[1177]: time="2024-08-18T19:10:27.508123156Z" level=warning msg="cleaning up after shim disconnected" id=24788de6a779b6b3b57b2e2becb6248a4b461c40237836c9a085858f19bf537e namespace=moby
	Aug 18 19:10:27 ha-373000 dockerd[1177]: time="2024-08-18T19:10:27.508131683Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 18 19:10:27 ha-373000 dockerd[1169]: time="2024-08-18T19:10:27.508457282Z" level=info msg="ignoring event" container=24788de6a779b6b3b57b2e2becb6248a4b461c40237836c9a085858f19bf537e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 18 19:10:27 ha-373000 dockerd[1177]: time="2024-08-18T19:10:27.520349911Z" level=warning msg="cleanup warnings time=\"2024-08-18T19:10:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 18 19:10:27 ha-373000 dockerd[1177]: time="2024-08-18T19:10:27.569676280Z" level=info msg="shim disconnected" id=7e27aa53db964848f64d9282a63325b77e8066d4f7afd7e078ca49d74f625756 namespace=moby
	Aug 18 19:10:27 ha-373000 dockerd[1177]: time="2024-08-18T19:10:27.569734174Z" level=warning msg="cleaning up after shim disconnected" id=7e27aa53db964848f64d9282a63325b77e8066d4f7afd7e078ca49d74f625756 namespace=moby
	Aug 18 19:10:27 ha-373000 dockerd[1177]: time="2024-08-18T19:10:27.569742722Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 18 19:10:27 ha-373000 dockerd[1169]: time="2024-08-18T19:10:27.569876898Z" level=info msg="ignoring event" container=7e27aa53db964848f64d9282a63325b77e8066d4f7afd7e078ca49d74f625756 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
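	[triage note] The "shim disconnected" / "ignoring event" pairs above are containerd tearing down tasks for containers that exited; the runc "exit status 255" warning during cleanup is a symptom of the exit, not its cause. A sketch for pulling exit codes and last output on the node, assuming the Docker runtime is still answering (container ID taken from the log):
	
	  # recently exited containers with their status
	  docker ps -a --filter status=exited --format 'table {{.ID}}\t{{.Names}}\t{{.Status}}'
	  # exit code, finish time and final log lines of the kube-apiserver container seen above
	  docker inspect --format '{{.State.ExitCode}} {{.State.FinishedAt}}' 7e27aa53db96
	  docker logs --tail 30 7e27aa53db96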
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7e27aa53db964       604f5db92eaa8       34 seconds ago       Exited              kube-apiserver            3                   5f2bcb86e47be       kube-apiserver-ha-373000
	24788de6a779b       045733566833c       36 seconds ago       Exited              kube-controller-manager   4                   45b85b05f9eab       kube-controller-manager-ha-373000
	e7bf93d680505       38af8ddebf499       About a minute ago   Running             kube-vip                  1                   37cbb7af9134a       kube-vip-ha-373000
	5bb7217cec87f       1766f54c897f0       About a minute ago   Running             kube-scheduler            2                   11d6e68c74890       kube-scheduler-ha-373000
	4ad014ace2b0a       2e96e5913fc06       About a minute ago   Running             etcd                      2                   4905344ca55ee       etcd-ha-373000
	eb459a6cac5c5       6e38f40d628db       4 minutes ago        Exited              storage-provisioner       2                   3772c138aa65e       storage-provisioner
	fc1b30cd2c8f2       8c811b4aec35f       4 minutes ago        Exited              busybox                   1                   eb4ed9664dda9       busybox-7dff88458-hdg8r
	f3dbf3c176d9d       cbb01a7bd410d       4 minutes ago        Exited              coredns                   1                   fc1f2fb60f7c5       coredns-6f6b679f8f-rcfmc
	09b8ded75e80f       cbb01a7bd410d       4 minutes ago        Exited              coredns                   1                   bfce6a3dd1783       coredns-6f6b679f8f-hv98f
	530d580001894       ad83b2ca7b09e       4 minutes ago        Exited              kube-proxy                1                   c8f48c6f44e55       kube-proxy-2xkhp
	fbeef7aab770f       12968670680f4       4 minutes ago        Exited              kindnet-cni               1                   32a6ca59d02e7       kindnet-k4c4p
	ebe78e53d91d8       38af8ddebf499       5 minutes ago        Exited              kube-vip                  0                   32cc18cf0bf63       kube-vip-ha-373000
	a9e532272f1be       2e96e5913fc06       5 minutes ago        Exited              etcd                      1                   4c11500a40693       etcd-ha-373000
	de016fdbd6fe9       1766f54c897f0       5 minutes ago        Exited              kube-scheduler            1                   a3cc486386c46       kube-scheduler-ha-373000
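	[triage note] Read together: etcd, kube-scheduler and kube-vip came back and are Running, while kube-apiserver (attempt 3) and kube-controller-manager (attempt 4) keep exiting, i.e. a crash loop on the two components that need a healthy datastore. A one-liner sketch for watching the loop, assuming kubelet runs as the usual systemd unit in the VM:
	
	  # kubelet records each back-off decision in its journal
	  sudo journalctl -u kubelet --no-pager | grep -i crashloopbackoff | tail -n 10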
	
	
	==> coredns [09b8ded75e80] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54168 - 48100 "HINFO IN 5449853140043981156.1960656544577820065. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012696853s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1317389180]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.060) (total time: 30002ms):
	Trace[1317389180]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (19:06:13.063)
	Trace[1317389180]: [30.002782846s] [30.002782846s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[804407349]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.060) (total time: 30003ms):
	Trace[804407349]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (19:06:13.063)
	Trace[804407349]: [30.003234686s] [30.003234686s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1407395902]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.063) (total time: 30001ms):
	Trace[1407395902]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (19:06:13.064)
	Trace[1407395902]: [30.001205512s] [30.001205512s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
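	[triage note] Both coredns replicas (this one and the next) time out listing objects through the in-cluster service VIP 10.96.0.1:443, which kube-proxy forwards to the apiserver; with the apiserver down those dials can only hang. A minimal reachability sketch from the node, assuming curl is present in the VM (the VIP is taken from the log):
	
	  # does anything answer on the kubernetes service VIP?
	  curl -k --connect-timeout 5 https://10.96.0.1:443/version || echo "service VIP unreachable"
	  # are kube-proxy's iptables rules for the VIP installed at all?
	  sudo iptables-save | grep 10.96.0.1 | head -n 5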
	
	
	==> coredns [f3dbf3c176d9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48257 - 13179 "HINFO IN 3102078210809204073.2916918949998232158. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.013387746s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1929152146]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.060) (total time: 30003ms):
	Trace[1929152146]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (19:06:13.063)
	Trace[1929152146]: [30.003742558s] [30.003742558s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[763765503]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.060) (total time: 30003ms):
	Trace[763765503]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (19:06:13.064)
	Trace[763765503]: [30.003508272s] [30.003508272s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1437534784]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.063) (total time: 30000ms):
	Trace[1437534784]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:06:13.064)
	Trace[1437534784]: [30.000417221s] [30.000417221s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0818 19:10:40.618965    2863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0818 19:10:40.620649    2863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0818 19:10:40.621991    2863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0818 19:10:40.623432    2863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0818 19:10:40.624811    2863 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
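	[triage note] kubectl is refused on localhost:8443, consistent with the Exited kube-apiserver container above; the controller-manager's /healthz timeout against 192.169.0.5:8443 later in this dump is the same outage seen from another client. A direct probe sketch, run on the node:
	
	  # nothing should be listening on 8443 while the apiserver is down
	  sudo ss -tlnp | grep :8443 || echo "no listener on 8443"
	  # the health endpoint, for when the apiserver comes back
	  curl -k --connect-timeout 5 https://localhost:8443/healthz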
	
	
	==> dmesg <==
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.035419] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007963] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.691053] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000000] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006881] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.891457] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.229875] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000003] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.300202] systemd-fstab-generator[467]: Ignoring "noauto" option for root device
	[  +0.101114] systemd-fstab-generator[479]: Ignoring "noauto" option for root device
	[  +2.001813] systemd-fstab-generator[1098]: Ignoring "noauto" option for root device
	[  +0.247527] systemd-fstab-generator[1135]: Ignoring "noauto" option for root device
	[  +0.100995] systemd-fstab-generator[1147]: Ignoring "noauto" option for root device
	[  +0.114396] systemd-fstab-generator[1161]: Ignoring "noauto" option for root device
	[  +0.050935] kauditd_printk_skb: 145 callbacks suppressed
	[  +2.471749] systemd-fstab-generator[1379]: Ignoring "noauto" option for root device
	[  +0.100344] systemd-fstab-generator[1391]: Ignoring "noauto" option for root device
	[  +0.088670] systemd-fstab-generator[1403]: Ignoring "noauto" option for root device
	[  +0.117158] systemd-fstab-generator[1418]: Ignoring "noauto" option for root device
	[  +0.433473] systemd-fstab-generator[1580]: Ignoring "noauto" option for root device
	[  +6.511307] kauditd_printk_skb: 168 callbacks suppressed
	[ +21.355887] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [4ad014ace2b0] <==
	{"level":"warn","ts":"2024-08-18T19:10:35.522345Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"64535d9f3a4791ce","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-18T19:10:35.645755Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:35.645965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:35.646145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:35.646837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 2939] sent MsgPreVote request to 64535d9f3a4791ce at term 3"}
	{"level":"warn","ts":"2024-08-18T19:10:35.902167Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740481722400528,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-18T19:10:36.409295Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740481722400528,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-18T19:10:36.909620Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740481722400528,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-18T19:10:37.410029Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740481722400528,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-18T19:10:37.439596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:37.439912Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:37.440199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:37.440412Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 2939] sent MsgPreVote request to 64535d9f3a4791ce at term 3"}
	{"level":"warn","ts":"2024-08-18T19:10:37.910246Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740481722400528,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-18T19:10:38.411386Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740481722400528,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-18T19:10:38.911588Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740481722400528,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-18T19:10:39.239512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:39.239538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:39.239546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:39.239557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 2939] sent MsgPreVote request to 64535d9f3a4791ce at term 3"}
	{"level":"warn","ts":"2024-08-18T19:10:39.412430Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740481722400528,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-18T19:10:39.917438Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740481722400528,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-18T19:10:40.418292Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740481722400528,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-18T19:10:40.523142Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"64535d9f3a4791ce","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:10:40.523187Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"64535d9f3a4791ce","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	
	
	==> etcd [a9e532272f1b] <==
	{"level":"warn","ts":"2024-08-18T19:08:52.581006Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T19:08:49.021104Z","time spent":"3.559898593s","remote":"127.0.0.1:56420","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":0,"response size":0,"request content":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true "}
	2024/08/18 19:08:52 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-18T19:08:52.581089Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"6.986891044s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-18T19:08:52.581101Z","caller":"traceutil/trace.go:171","msg":"trace[1676890744] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; }","duration":"6.986905757s","start":"2024-08-18T19:08:45.594192Z","end":"2024-08-18T19:08:52.581098Z","steps":["trace[1676890744] 'agreement among raft nodes before linearized reading'  (duration: 6.986891942s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T19:08:52.581111Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T19:08:45.594168Z","time spent":"6.986940437s","remote":"127.0.0.1:56392","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	2024/08/18 19:08:52 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-18T19:08:52.581159Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"5.633749967s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-18T19:08:52.581170Z","caller":"traceutil/trace.go:171","msg":"trace[1130682409] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; }","duration":"5.633762365s","start":"2024-08-18T19:08:46.947405Z","end":"2024-08-18T19:08:52.581167Z","steps":["trace[1130682409] 'agreement among raft nodes before linearized reading'  (duration: 5.633750027s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T19:08:52.581180Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T19:08:46.947373Z","time spent":"5.633803888s","remote":"127.0.0.1:56504","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":0,"response size":0,"request content":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true "}
	2024/08/18 19:08:52 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-18T19:08:52.581225Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T19:08:52.217567Z","time spent":"363.656855ms","remote":"127.0.0.1:56498","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/08/18 19:08:52 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-18T19:08:52.608176Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-18T19:08:52.608248Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-18T19:08:52.608286Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-18T19:08:52.608395Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:08:52.608428Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:08:52.608446Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:08:52.608520Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:08:52.608546Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:08:52.608595Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:08:52.608606Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:08:52.610214Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-18T19:08:52.610316Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-18T19:08:52.610348Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-373000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 19:10:41 up 1 min,  0 users,  load average: 0.17, 0.10, 0.04
	Linux ha-373000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [fbeef7aab770] <==
	I0818 19:08:13.321287       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	I0818 19:08:13.321527       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0818 19:08:13.321536       1 main.go:299] handling current node
	I0818 19:08:13.321545       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0818 19:08:13.321548       1 main.go:322] Node ha-373000-m02 has CIDR [10.244.1.0/24] 
	I0818 19:08:23.318236       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0818 19:08:23.318272       1 main.go:322] Node ha-373000-m02 has CIDR [10.244.1.0/24] 
	I0818 19:08:23.318358       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0818 19:08:23.318384       1 main.go:322] Node ha-373000-m03 has CIDR [10.244.2.0/24] 
	I0818 19:08:23.318431       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0818 19:08:23.318455       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	I0818 19:08:23.318492       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0818 19:08:23.318516       1 main.go:299] handling current node
	I0818 19:08:33.318121       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0818 19:08:33.318160       1 main.go:299] handling current node
	I0818 19:08:33.318171       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0818 19:08:33.318175       1 main.go:322] Node ha-373000-m02 has CIDR [10.244.1.0/24] 
	I0818 19:08:33.318256       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0818 19:08:33.318261       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	I0818 19:08:43.314133       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0818 19:08:43.314185       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	I0818 19:08:43.314278       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0818 19:08:43.314360       1 main.go:299] handling current node
	I0818 19:08:43.314444       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0818 19:08:43.314482       1 main.go:322] Node ha-373000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [7e27aa53db96] <==
	I0818 19:10:06.959907       1 options.go:228] external host was not specified, using 192.169.0.5
	I0818 19:10:06.961347       1 server.go:142] Version: v1.31.0
	I0818 19:10:06.961387       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:10:07.546161       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0818 19:10:07.549946       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0818 19:10:07.552371       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0818 19:10:07.552381       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0818 19:10:07.552555       1 instance.go:232] Using reconciler: lease
	W0818 19:10:27.545475       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0818 19:10:27.545529       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0818 19:10:27.554420       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	W0818 19:10:27.554432       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
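	[triage note] This ties the chain together: the apiserver blocks in "Using reconciler: lease" waiting on etcd at 127.0.0.1:2379, the gRPC handshakes are cancelled when the storage-factory deadline expires, and the F-level log kills the process, which is the CrashLoopBackOff the kubelet reports below. A pure-bash TCP probe of the etcd client port (no extra tools assumed; etcd requires client certs here, so this only proves reachability, not health):
	
	  if timeout 3 bash -c 'exec 3<>/dev/tcp/127.0.0.1/2379' 2>/dev/null; then
	    echo "etcd client port open"
	  else
	    echo "etcd client port closed or not accepting"
	  fi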
	
	
	==> kube-controller-manager [24788de6a779] <==
	I0818 19:10:05.103965       1 serving.go:386] Generated self-signed cert in-memory
	I0818 19:10:05.483625       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0818 19:10:05.483663       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:10:05.484840       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0818 19:10:05.484954       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0818 19:10:05.484863       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0818 19:10:05.485038       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0818 19:10:27.488487       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": net/http: TLS handshake timeout"
	
	
	==> kube-proxy [530d58000189] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 19:05:43.260298       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 19:05:43.283054       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0818 19:05:43.283201       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 19:05:43.332462       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 19:05:43.332509       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 19:05:43.332527       1 server_linux.go:169] "Using iptables Proxier"
	I0818 19:05:43.335382       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 19:05:43.336178       1 server.go:483] "Version info" version="v1.31.0"
	I0818 19:05:43.336209       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:05:43.339664       1 config.go:197] "Starting service config controller"
	I0818 19:05:43.340475       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 19:05:43.340854       1 config.go:104] "Starting endpoint slice config controller"
	I0818 19:05:43.340884       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 19:05:43.342595       1 config.go:326] "Starting node config controller"
	I0818 19:05:43.342621       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 19:05:43.440978       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0818 19:05:43.441099       1 shared_informer.go:320] Caches are synced for service config
	I0818 19:05:43.442676       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5bb7217cec87] <==
	E0818 19:10:28.561278       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48758->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.561149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48800->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.561322       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48800->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.561446       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48768->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.561856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48768->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.562037       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48784->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.562108       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48784->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.562348       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:55098->192.169.0.5:8443: read: connection reset by peer
	W0818 19:10:28.562424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48820->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.562538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48820->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	E0818 19:10:28.562490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:55098->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.562730       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48804->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.563108       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48804->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.562973       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48786->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.563171       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48786->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.563330       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48824->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.563535       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48824->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.563376       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48782->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.563736       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48782->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.563801       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48790->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.563955       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48790->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.680351       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0818 19:10:28.680700       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:10:29.353980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0818 19:10:29.354184       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-scheduler [de016fdbd6fe] <==
	I0818 19:04:58.645297       1 serving.go:386] Generated self-signed cert in-memory
	W0818 19:05:08.939365       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0818 19:05:08.939390       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0818 19:05:08.939395       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0818 19:05:17.672661       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0818 19:05:17.674961       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:05:17.680297       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0818 19:05:17.680709       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0818 19:05:17.683175       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0818 19:05:17.689784       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0818 19:05:17.786103       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0818 19:08:52.663744       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0818 19:08:52.664520       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0818 19:08:52.664805       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0818 19:08:52.665618       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 18 19:10:24 ha-373000 kubelet[1587]: I0818 19:10:24.149998    1587 kubelet_node_status.go:72] "Attempting to register node" node="ha-373000"
	Aug 18 19:10:26 ha-373000 kubelet[1587]: E0818 19:10:26.363782    1587 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-373000"
	Aug 18 19:10:26 ha-373000 kubelet[1587]: E0818 19:10:26.364415    1587 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-373000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Aug 18 19:10:27 ha-373000 kubelet[1587]: I0818 19:10:27.784218    1587 scope.go:117] "RemoveContainer" containerID="6f6bc1d9592b465e3b7c4d9db0e74b67040cbbed37626775b4aad68af80ddeb2"
	Aug 18 19:10:27 ha-373000 kubelet[1587]: I0818 19:10:27.785184    1587 scope.go:117] "RemoveContainer" containerID="7e27aa53db964848f64d9282a63325b77e8066d4f7afd7e078ca49d74f625756"
	Aug 18 19:10:27 ha-373000 kubelet[1587]: E0818 19:10:27.785346    1587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-373000_kube-system(be3d0c0dc33917295e5fed284f29b0d0)\"" pod="kube-system/kube-apiserver-ha-373000" podUID="be3d0c0dc33917295e5fed284f29b0d0"
	Aug 18 19:10:27 ha-373000 kubelet[1587]: I0818 19:10:27.792994    1587 scope.go:117] "RemoveContainer" containerID="24788de6a779b6b3b57b2e2becb6248a4b461c40237836c9a085858f19bf537e"
	Aug 18 19:10:27 ha-373000 kubelet[1587]: E0818 19:10:27.793092    1587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-373000_kube-system(ed589efba4f3e45b48b475449dc23ab8)\"" pod="kube-system/kube-controller-manager-ha-373000" podUID="ed589efba4f3e45b48b475449dc23ab8"
	Aug 18 19:10:27 ha-373000 kubelet[1587]: I0818 19:10:27.797222    1587 scope.go:117] "RemoveContainer" containerID="0a3644cb0db7c69e66c1b1d0f3234fba20e233a15e27e5772148d80d14f33f4e"
	Aug 18 19:10:28 ha-373000 kubelet[1587]: E0818 19:10:28.895968    1587 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-373000\" not found"
	Aug 18 19:10:29 ha-373000 kubelet[1587]: I0818 19:10:29.422762    1587 scope.go:117] "RemoveContainer" containerID="24788de6a779b6b3b57b2e2becb6248a4b461c40237836c9a085858f19bf537e"
	Aug 18 19:10:29 ha-373000 kubelet[1587]: E0818 19:10:29.423034    1587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-373000_kube-system(ed589efba4f3e45b48b475449dc23ab8)\"" pod="kube-system/kube-controller-manager-ha-373000" podUID="ed589efba4f3e45b48b475449dc23ab8"
	Aug 18 19:10:32 ha-373000 kubelet[1587]: E0818 19:10:32.507755    1587 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-373000.17ece84946cc9aa1  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-373000,UID:ha-373000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-373000,},FirstTimestamp:2024-08-18 19:09:18.794128033 +0000 UTC m=+0.098935686,LastTimestamp:2024-08-18 19:09:18.794128033 +0000 UTC m=+0.098935686,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-373000,}"
	Aug 18 19:10:33 ha-373000 kubelet[1587]: I0818 19:10:33.051084    1587 scope.go:117] "RemoveContainer" containerID="24788de6a779b6b3b57b2e2becb6248a4b461c40237836c9a085858f19bf537e"
	Aug 18 19:10:33 ha-373000 kubelet[1587]: E0818 19:10:33.051343    1587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-373000_kube-system(ed589efba4f3e45b48b475449dc23ab8)\"" pod="kube-system/kube-controller-manager-ha-373000" podUID="ed589efba4f3e45b48b475449dc23ab8"
	Aug 18 19:10:33 ha-373000 kubelet[1587]: I0818 19:10:33.366165    1587 kubelet_node_status.go:72] "Attempting to register node" node="ha-373000"
	Aug 18 19:10:34 ha-373000 kubelet[1587]: I0818 19:10:34.986485    1587 scope.go:117] "RemoveContainer" containerID="7e27aa53db964848f64d9282a63325b77e8066d4f7afd7e078ca49d74f625756"
	Aug 18 19:10:34 ha-373000 kubelet[1587]: E0818 19:10:34.987184    1587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-373000_kube-system(be3d0c0dc33917295e5fed284f29b0d0)\"" pod="kube-system/kube-apiserver-ha-373000" podUID="be3d0c0dc33917295e5fed284f29b0d0"
	Aug 18 19:10:35 ha-373000 kubelet[1587]: E0818 19:10:35.579478    1587 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-373000"
	Aug 18 19:10:35 ha-373000 kubelet[1587]: E0818 19:10:35.579545    1587 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-373000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Aug 18 19:10:35 ha-373000 kubelet[1587]: I0818 19:10:35.861801    1587 scope.go:117] "RemoveContainer" containerID="7e27aa53db964848f64d9282a63325b77e8066d4f7afd7e078ca49d74f625756"
	Aug 18 19:10:35 ha-373000 kubelet[1587]: E0818 19:10:35.861956    1587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-373000_kube-system(be3d0c0dc33917295e5fed284f29b0d0)\"" pod="kube-system/kube-apiserver-ha-373000" podUID="be3d0c0dc33917295e5fed284f29b0d0"
	Aug 18 19:10:38 ha-373000 kubelet[1587]: W0818 19:10:38.650520    1587 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 18 19:10:38 ha-373000 kubelet[1587]: E0818 19:10:38.650612    1587 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 18 19:10:38 ha-373000 kubelet[1587]: E0818 19:10:38.896386    1587 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-373000\" not found"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-373000 -n ha-373000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-373000 -n ha-373000: exit status 2 (147.447933ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-373000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (2.66s)
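
The post-mortem above determines apiserver health by shelling out to the minikube binary with a Go template selector ({{.APIServer}}) and tolerating a non-zero exit as long as output was produced. A minimal sketch of that pattern, assuming the binary path and profile name from the logs (apiServerState is a hypothetical helper, not the harness's actual code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// apiServerState runs `minikube status --format={{.APIServer}}` for the given
	// profile. The command exits non-zero when components are down (exit status 2
	// above), so the printed state is inspected before the error is surfaced.
	func apiServerState(profile string) (string, error) {
		cmd := exec.Command("out/minikube-darwin-amd64",
			"status", "--format={{.APIServer}}", "-p", profile, "-n", profile)
		out, err := cmd.CombinedOutput()
		if state := strings.TrimSpace(string(out)); state != "" {
			return state, nil // e.g. "Stopped", as in the output above
		}
		return "", fmt.Errorf("status produced no output: %v", err)
	}

	func main() {
		state, err := apiServerState("ha-373000")
		if err == nil && state != "Running" {
			fmt.Printf("apiserver is not running, skipping kubectl commands (state=%q)\n", state)
		}
	}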

TestMultiControlPlane/serial/AddSecondaryNode (2.78s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-373000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p ha-373000 --control-plane -v=7 --alsologtostderr: exit status 103 (238.338034ms)

-- stdout --
	* The control-plane node ha-373000-m02 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-373000"

-- /stdout --
** stderr ** 
	I0818 12:10:41.837287    4047 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:10:41.837593    4047 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:10:41.837599    4047 out.go:358] Setting ErrFile to fd 2...
	I0818 12:10:41.837602    4047 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:10:41.837772    4047 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1007/.minikube/bin
	I0818 12:10:41.838129    4047 mustload.go:65] Loading cluster: ha-373000
	I0818 12:10:41.838455    4047 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:10:41.838828    4047 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:10:41.838869    4047 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:10:41.847214    4047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52116
	I0818 12:10:41.847607    4047 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:10:41.848024    4047 main.go:141] libmachine: Using API Version  1
	I0818 12:10:41.848058    4047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:10:41.848298    4047 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:10:41.848425    4047 main.go:141] libmachine: (ha-373000) Calling .GetState
	I0818 12:10:41.848511    4047 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:10:41.848571    4047 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid from json: 3990
	I0818 12:10:41.849551    4047 host.go:66] Checking if "ha-373000" exists ...
	I0818 12:10:41.849795    4047 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:10:41.849831    4047 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:10:41.858032    4047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52118
	I0818 12:10:41.858365    4047 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:10:41.858707    4047 main.go:141] libmachine: Using API Version  1
	I0818 12:10:41.858721    4047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:10:41.858939    4047 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:10:41.859072    4047 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:10:41.859406    4047 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:10:41.859432    4047 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:10:41.867709    4047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52120
	I0818 12:10:41.868041    4047 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:10:41.868376    4047 main.go:141] libmachine: Using API Version  1
	I0818 12:10:41.868391    4047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:10:41.868610    4047 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:10:41.868711    4047 main.go:141] libmachine: (ha-373000-m02) Calling .GetState
	I0818 12:10:41.868787    4047 main.go:141] libmachine: (ha-373000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:10:41.868859    4047 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid from json: 3997
	I0818 12:10:41.869792    4047 host.go:66] Checking if "ha-373000-m02" exists ...
	I0818 12:10:41.870034    4047 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:10:41.870056    4047 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:10:41.878562    4047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52122
	I0818 12:10:41.878914    4047 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:10:41.879249    4047 main.go:141] libmachine: Using API Version  1
	I0818 12:10:41.879258    4047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:10:41.879462    4047 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:10:41.879576    4047 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:10:41.879685    4047 api_server.go:166] Checking apiserver status ...
	I0818 12:10:41.879735    4047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 12:10:41.879758    4047 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:10:41.879870    4047 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:10:41.879960    4047 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:10:41.880077    4047 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:10:41.880164    4047 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	W0818 12:10:41.918071    4047 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	W0818 12:10:41.918247    4047 out.go:270] ! The control-plane node ha-373000 apiserver is not running (will try others): (state=Stopped)
	! The control-plane node ha-373000 apiserver is not running (will try others): (state=Stopped)
	I0818 12:10:41.918255    4047 api_server.go:166] Checking apiserver status ...
	I0818 12:10:41.918304    4047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 12:10:41.918320    4047 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:10:41.918445    4047 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:10:41.918555    4047 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:10:41.918658    4047 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:10:41.918745    4047 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	W0818 12:10:41.955537    4047 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0818 12:10:41.976858    4047 out.go:177] * The control-plane node ha-373000-m02 apiserver is not running: (state=Stopped)
	I0818 12:10:41.998056    4047 out.go:177]   To start a cluster, run: "minikube start -p ha-373000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-amd64 node add -p ha-373000 --control-plane -v=7 --alsologtostderr" : exit status 103
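
Exit status 103 here is a deliberate guard exit rather than a crash: minikube prints the guidance message on stdout and refuses to add a control-plane node while no apiserver is reachable. The failing step reduces to running the binary and inspecting the process exit code; a minimal illustrative sketch (not the actual ha_test.go helper), using the same arguments as the log:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64",
			"node", "add", "-p", "ha-373000", "--control-plane", "-v=7", "--alsologtostderr")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// This is the branch the test hits: every control-plane candidate
			// reported state=Stopped, so node add bails out with status 103.
			fmt.Printf("node add exited %d:\n%s", exitErr.ExitCode(), out)
		}
	}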
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-373000 -n ha-373000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-373000 -n ha-373000: exit status 2 (148.382518ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-373000 logs -n 25: (2.191568443s)
helpers_test.go:252: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n ha-373000-m04 sudo cat                                                                                      | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /home/docker/cp-test_ha-373000-m03_ha-373000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-373000 cp testdata/cp-test.txt                                                                                            | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-373000 cp ha-373000-m04:/home/docker/cp-test.txt                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1633039751/001/cp-test_ha-373000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-373000 cp ha-373000-m04:/home/docker/cp-test.txt                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000:/home/docker/cp-test_ha-373000-m04_ha-373000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n ha-373000 sudo cat                                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /home/docker/cp-test_ha-373000-m04_ha-373000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-373000 cp ha-373000-m04:/home/docker/cp-test.txt                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m02:/home/docker/cp-test_ha-373000-m04_ha-373000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n ha-373000-m02 sudo cat                                                                                      | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /home/docker/cp-test_ha-373000-m04_ha-373000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-373000 cp ha-373000-m04:/home/docker/cp-test.txt                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m03:/home/docker/cp-test_ha-373000-m04_ha-373000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n ha-373000-m03 sudo cat                                                                                      | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /home/docker/cp-test_ha-373000-m04_ha-373000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-373000 node stop m02 -v=7                                                                                                 | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-373000 node start m02 -v=7                                                                                                | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:04 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-373000 -v=7                                                                                                       | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:04 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-373000 -v=7                                                                                                            | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:04 PDT | 18 Aug 24 12:04 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-373000 --wait=true -v=7                                                                                                | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:04 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-373000                                                                                                            | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:08 PDT |                     |
	| node    | ha-373000 node delete m03 -v=7                                                                                               | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:08 PDT | 18 Aug 24 12:08 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-373000 stop -v=7                                                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:08 PDT | 18 Aug 24 12:09 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-373000 --wait=true                                                                                                     | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:09 PDT |                     |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	| node    | add -p ha-373000                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:10 PDT |                     |
	|         | --control-plane -v=7                                                                                                         |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 12:09:00
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 12:09:00.388954    3976 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:09:00.389224    3976 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:09:00.389230    3976 out.go:358] Setting ErrFile to fd 2...
	I0818 12:09:00.389234    3976 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:09:00.389403    3976 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1007/.minikube/bin
	I0818 12:09:00.390788    3976 out.go:352] Setting JSON to false
	I0818 12:09:00.412980    3976 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2311,"bootTime":1724005829,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0818 12:09:00.413073    3976 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:09:00.435491    3976 out.go:177] * [ha-373000] minikube v1.33.1 on Darwin 14.6.1
	I0818 12:09:00.478012    3976 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:09:00.478041    3976 notify.go:220] Checking for updates...
	I0818 12:09:00.520842    3976 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:09:00.541902    3976 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0818 12:09:00.562974    3976 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:09:00.583978    3976 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	I0818 12:09:00.604937    3976 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:09:00.626633    3976 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:09:00.627309    3976 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:09:00.627392    3976 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:09:00.636929    3976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52014
	I0818 12:09:00.637287    3976 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:09:00.637735    3976 main.go:141] libmachine: Using API Version  1
	I0818 12:09:00.637744    3976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:09:00.637948    3976 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:09:00.638063    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:00.638277    3976 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:09:00.638525    3976 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:09:00.638545    3976 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:09:00.646880    3976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52016
	I0818 12:09:00.647224    3976 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:09:00.647595    3976 main.go:141] libmachine: Using API Version  1
	I0818 12:09:00.647613    3976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:09:00.647826    3976 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:09:00.647950    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:00.676977    3976 out.go:177] * Using the hyperkit driver based on existing profile
	I0818 12:09:00.718931    3976 start.go:297] selected driver: hyperkit
	I0818 12:09:00.718961    3976 start.go:901] validating driver "hyperkit" against &{Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:09:00.719183    3976 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:09:00.719386    3976 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:09:00.719595    3976 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19423-1007/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0818 12:09:00.729307    3976 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0818 12:09:00.733175    3976 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:09:00.733199    3976 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0818 12:09:00.735834    3976 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:09:00.735880    3976 cni.go:84] Creating CNI manager for ""
	I0818 12:09:00.735888    3976 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0818 12:09:00.735960    3976 start.go:340] cluster config:
	{Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
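
The APIServerHAVIP field in this config (192.169.0.254) is the address behind control-plane.minikube.internal:8443, i.e. exactly the endpoint the kubelet log earlier in this report fails to reach ("dial tcp 192.169.0.254:8443: connect: no route to host"). A quick way to reproduce that check in isolation, as a hedged sketch one could run from a node (the timeout value is arbitrary):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Probe the HA virtual IP that the kubelet keeps trying to dial.
		conn, err := net.DialTimeout("tcp", "192.169.0.254:8443", 3*time.Second)
		if err != nil {
			fmt.Println("VIP unreachable:", err) // matches the failures above
			return
		}
		conn.Close()
		fmt.Println("VIP reachable")
	}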
	I0818 12:09:00.736064    3976 iso.go:125] acquiring lock: {Name:mkfcc70059a4397f76252ba67d02448d3569c468 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:09:00.757023    3976 out.go:177] * Starting "ha-373000" primary control-plane node in "ha-373000" cluster
	I0818 12:09:00.777783    3976 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:09:00.777901    3976 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0818 12:09:00.777924    3976 cache.go:56] Caching tarball of preloaded images
	I0818 12:09:00.778128    3976 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0818 12:09:00.778148    3976 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:09:00.778333    3976 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:09:00.779289    3976 start.go:360] acquireMachinesLock for ha-373000: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:09:00.779483    3976 start.go:364] duration metric: took 143.76µs to acquireMachinesLock for "ha-373000"
	I0818 12:09:00.779521    3976 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:09:00.779537    3976 fix.go:54] fixHost starting: 
	I0818 12:09:00.779956    3976 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:09:00.779984    3976 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:09:00.789309    3976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52018
	I0818 12:09:00.789666    3976 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:09:00.790031    3976 main.go:141] libmachine: Using API Version  1
	I0818 12:09:00.790040    3976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:09:00.790251    3976 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:09:00.790366    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:00.790468    3976 main.go:141] libmachine: (ha-373000) Calling .GetState
	I0818 12:09:00.790556    3976 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:00.790639    3976 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid from json: 3836
	I0818 12:09:00.791548    3976 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid 3836 missing from process table
	I0818 12:09:00.791593    3976 fix.go:112] recreateIfNeeded on ha-373000: state=Stopped err=<nil>
	I0818 12:09:00.791619    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	W0818 12:09:00.791703    3976 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:09:00.833742    3976 out.go:177] * Restarting existing hyperkit VM for "ha-373000" ...
	I0818 12:09:00.854617    3976 main.go:141] libmachine: (ha-373000) Calling .Start
	I0818 12:09:00.854890    3976 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:00.854917    3976 main.go:141] libmachine: (ha-373000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid
	I0818 12:09:00.856657    3976 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid 3836 missing from process table
	I0818 12:09:00.856693    3976 main.go:141] libmachine: (ha-373000) DBG | pid 3836 is in state "Stopped"
	I0818 12:09:00.856718    3976 main.go:141] libmachine: (ha-373000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid...
	I0818 12:09:00.856984    3976 main.go:141] libmachine: (ha-373000) DBG | Using UUID 2f6e5f86-d003-4f9b-8f55-d5f48a14c3df
	I0818 12:09:00.989123    3976 main.go:141] libmachine: (ha-373000) DBG | Generated MAC be:21:66:25:9a:b1
	I0818 12:09:00.989174    3976 main.go:141] libmachine: (ha-373000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000
	I0818 12:09:00.989237    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2f6e5f86-d003-4f9b-8f55-d5f48a14c3df", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c2540)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:09:00.989280    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2f6e5f86-d003-4f9b-8f55-d5f48a14c3df", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c2540)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:09:00.989323    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2f6e5f86-d003-4f9b-8f55-d5f48a14c3df", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/ha-373000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"}
	I0818 12:09:00.989366    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2f6e5f86-d003-4f9b-8f55-d5f48a14c3df -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/ha-373000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"
	I0818 12:09:00.989381    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 12:09:00.990799    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 DEBUG: hyperkit: Pid is 3990
	I0818 12:09:00.991176    3976 main.go:141] libmachine: (ha-373000) DBG | Attempt 0
	I0818 12:09:00.991196    3976 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:00.991218    3976 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid from json: 3990
	I0818 12:09:00.993000    3976 main.go:141] libmachine: (ha-373000) DBG | Searching for be:21:66:25:9a:b1 in /var/db/dhcpd_leases ...
	I0818 12:09:00.993068    3976 main.go:141] libmachine: (ha-373000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0818 12:09:00.993082    3976 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:09:00.993090    3976 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:09:00.993097    3976 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c3975b}
	I0818 12:09:00.993119    3976 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39749}
	I0818 12:09:00.993129    3976 main.go:141] libmachine: (ha-373000) DBG | Found match: be:21:66:25:9a:b1
	I0818 12:09:00.993139    3976 main.go:141] libmachine: (ha-373000) DBG | IP: 192.169.0.5
	I0818 12:09:00.993184    3976 main.go:141] libmachine: (ha-373000) Calling .GetConfigRaw
	I0818 12:09:00.994094    3976 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:09:00.994333    3976 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:09:00.994945    3976 machine.go:93] provisionDockerMachine start ...
	I0818 12:09:00.994967    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:00.995142    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:00.995271    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:00.995391    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:00.995521    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:00.995632    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:00.995768    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:00.996051    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:00.996062    3976 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 12:09:00.999904    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 12:09:01.080830    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 12:09:01.081571    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:09:01.081587    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:09:01.081595    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:09:01.081604    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:09:01.460230    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 12:09:01.460268    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 12:09:01.574713    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:09:01.574755    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:09:01.574768    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:09:01.574787    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:09:01.575699    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 12:09:01.575710    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 12:09:07.163001    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:07 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0818 12:09:07.163029    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:07 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0818 12:09:07.163053    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:07 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0818 12:09:07.186829    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:07 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0818 12:09:12.062770    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 12:09:12.062784    3976 main.go:141] libmachine: (ha-373000) Calling .GetMachineName
	I0818 12:09:12.062975    3976 buildroot.go:166] provisioning hostname "ha-373000"
	I0818 12:09:12.062986    3976 main.go:141] libmachine: (ha-373000) Calling .GetMachineName
	I0818 12:09:12.063087    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.063175    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:12.063280    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.063371    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.063480    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:12.063605    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:12.063750    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:12.063759    3976 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-373000 && echo "ha-373000" | sudo tee /etc/hostname
	I0818 12:09:12.131801    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-373000
	
	I0818 12:09:12.131819    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.131954    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:12.132061    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.132144    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.132224    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:12.132376    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:12.132528    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:12.132546    3976 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-373000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-373000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-373000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 12:09:12.199331    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 12:09:12.199349    3976 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1007/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1007/.minikube}
	I0818 12:09:12.199369    3976 buildroot.go:174] setting up certificates
	I0818 12:09:12.199383    3976 provision.go:84] configureAuth start
	I0818 12:09:12.199391    3976 main.go:141] libmachine: (ha-373000) Calling .GetMachineName
	I0818 12:09:12.199540    3976 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:09:12.199634    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.199719    3976 provision.go:143] copyHostCerts
	I0818 12:09:12.199749    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:09:12.199819    3976 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem, removing ...
	I0818 12:09:12.199828    3976 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:09:12.199960    3976 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem (1082 bytes)
	I0818 12:09:12.200176    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:09:12.200222    3976 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem, removing ...
	I0818 12:09:12.200227    3976 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:09:12.200306    3976 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem (1123 bytes)
	I0818 12:09:12.200461    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:09:12.200505    3976 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem, removing ...
	I0818 12:09:12.200509    3976 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:09:12.200584    3976 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem (1675 bytes)
	I0818 12:09:12.200731    3976 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem org=jenkins.ha-373000 san=[127.0.0.1 192.169.0.5 ha-373000 localhost minikube]
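
The provisioner is generating a TLS server certificate whose SAN list covers every name and address the Docker daemon may be reached by (127.0.0.1, 192.169.0.5, ha-373000, localhost, minikube). A rough sketch of producing a certificate with that SAN set in Go; note this self-signs for brevity, whereas the real flow signs with the ca.pem/ca-key.pem pair named above:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-373000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors CertExpiration above
			// SANs mirror the san=[...] list in the log line above.
			DNSNames:    []string{"ha-373000", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}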
	I0818 12:09:12.289022    3976 provision.go:177] copyRemoteCerts
	I0818 12:09:12.289076    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 12:09:12.289091    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.289227    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:12.289322    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.289416    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:12.289508    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:09:12.325856    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 12:09:12.325929    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 12:09:12.345953    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 12:09:12.346012    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0818 12:09:12.366027    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 12:09:12.366092    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 12:09:12.386212    3976 provision.go:87] duration metric: took 186.823558ms to configureAuth
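
For context on what configureAuth just provisioned: the daemon started below listens on tcp://0.0.0.0:2376 with --tlsverify, so a client must present the cert.pem/key.pem pair signed by the same CA that was pushed to /etc/docker. A minimal Go sketch of such a connection check, assuming the host-side paths shown in the copyHostCerts lines above (this is an illustration, not part of minikube):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"log"
	"os"
)

func main() {
	home := "/Users/jenkins/minikube-integration/19423-1007/.minikube"

	// CA that signed both the server cert in /etc/docker and the client pair below.
	caPEM, err := os.ReadFile(home + "/certs/ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		log.Fatal("no CA certs parsed")
	}

	clientCert, err := tls.LoadX509KeyPair(home+"/certs/cert.pem", home+"/certs/key.pem")
	if err != nil {
		log.Fatal(err)
	}

	// 192.169.0.5 is in the server cert's SAN list generated above, so
	// hostname verification succeeds against the bare IP.
	conn, err := tls.Dial("tcp", "192.169.0.5:2376", &tls.Config{
		RootCAs:      pool,
		Certificates: []tls.Certificate{clientCert},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	fmt.Println("mutual TLS with dockerd established")
}
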
	I0818 12:09:12.386225    3976 buildroot.go:189] setting minikube options for container-runtime
	I0818 12:09:12.386405    3976 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:09:12.386418    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:12.386551    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.386643    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:12.386731    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.386817    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.386909    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:12.387025    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:12.387159    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:12.387167    3976 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0818 12:09:12.445833    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0818 12:09:12.445851    3976 buildroot.go:70] root file system type: tmpfs
	I0818 12:09:12.445930    3976 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0818 12:09:12.445943    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.446067    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:12.446173    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.446279    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.446389    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:12.446543    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:12.446679    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:12.446725    3976 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0818 12:09:12.516077    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0818 12:09:12.516100    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.516233    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:12.516348    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.516437    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.516526    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:12.516667    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:12.516813    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:12.516825    3976 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0818 12:09:14.219167    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0818 12:09:14.219181    3976 machine.go:96] duration metric: took 13.22463913s to provisionDockerMachine
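
The SSH command above is the provisioner's idempotent-update idiom: render docker.service.new, diff it against the live unit, and only on a difference move it into place and daemon-reload/enable/restart. Here the diff fails because no unit existed yet, so the new file is installed and docker is enabled (hence the "Created symlink" line). A rough Go sketch of the same pattern (helper name is mine, not minikube's):

package provision

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit installs newContent at path only when it differs from the
// file currently there, then reloads systemd and restarts the unit.
// This is the Go shape of the shell idiom
// `diff -u old new || { mv new old; systemctl ...; }` seen in the log.
func updateUnit(path string, newContent []byte) error {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newContent) {
		return nil // unchanged: no restart needed
	}
	if err := os.WriteFile(path+".new", newContent, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "-f", "enable", "docker"},
		{"systemctl", "-f", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}
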
	I0818 12:09:14.219193    3976 start.go:293] postStartSetup for "ha-373000" (driver="hyperkit")
	I0818 12:09:14.219201    3976 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 12:09:14.219211    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:14.219390    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 12:09:14.219417    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:14.219519    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:14.219630    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:14.219724    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:14.219808    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:09:14.259561    3976 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 12:09:14.263959    3976 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 12:09:14.263976    3976 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/addons for local assets ...
	I0818 12:09:14.264080    3976 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/files for local assets ...
	I0818 12:09:14.264273    3976 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> 15262.pem in /etc/ssl/certs
	I0818 12:09:14.264280    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /etc/ssl/certs/15262.pem
	I0818 12:09:14.264487    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 12:09:14.272283    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:09:14.302942    3976 start.go:296] duration metric: took 83.742133ms for postStartSetup
	I0818 12:09:14.302965    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:14.303146    3976 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0818 12:09:14.303160    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:14.303248    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:14.303361    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:14.303436    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:14.303526    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:09:14.338080    3976 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0818 12:09:14.338142    3976 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0818 12:09:14.391638    3976 fix.go:56] duration metric: took 13.612527396s for fixHost
	I0818 12:09:14.391662    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:14.391810    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:14.391899    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:14.391991    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:14.392074    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:14.392222    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:14.392364    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:14.392372    3976 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 12:09:14.449746    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724008154.620949792
	
	I0818 12:09:14.449760    3976 fix.go:216] guest clock: 1724008154.620949792
	I0818 12:09:14.449772    3976 fix.go:229] Guest: 2024-08-18 12:09:14.620949792 -0700 PDT Remote: 2024-08-18 12:09:14.391652 -0700 PDT m=+14.038170292 (delta=229.297792ms)
	I0818 12:09:14.449789    3976 fix.go:200] guest clock delta is within tolerance: 229.297792ms
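
The guest-clock check works by running date +%s.%N inside the VM and comparing its output with the host clock captured when the SSH command returned; the 229ms delta here is under minikube's drift tolerance, so no clock adjustment is needed. A sketch of that comparison (the tolerance value below is illustrative; the real threshold lives in minikube's fix.go):

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// Guest output of `date +%s.%N`, copied from the log line above.
	const guestOut = "1724008154.620949792"
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	// float64 drops sub-microsecond precision, which is fine for a
	// millisecond-scale tolerance check.
	guest := time.Unix(0, int64(secs*float64(time.Second)))

	// Host-side timestamp when the SSH command returned (hard-coded to
	// the "Remote:" value above so the example is reproducible).
	pdt := time.FixedZone("PDT", -7*60*60)
	host := time.Date(2024, 8, 18, 12, 9, 14, 391652000, pdt)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumption for illustration
	fmt.Printf("delta=%v, within %v tolerance: %v\n", delta, tolerance, delta <= tolerance)
}
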
	I0818 12:09:14.449793    3976 start.go:83] releasing machines lock for "ha-373000", held for 13.670724274s
	I0818 12:09:14.449812    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:14.449942    3976 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:09:14.450037    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:14.450349    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:14.450474    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:14.450548    3976 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 12:09:14.450580    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:14.450637    3976 ssh_runner.go:195] Run: cat /version.json
	I0818 12:09:14.450648    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:14.450688    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:14.450746    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:14.450782    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:14.450836    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:14.450854    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:14.450935    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:14.450952    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:09:14.451045    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:09:14.541757    3976 ssh_runner.go:195] Run: systemctl --version
	I0818 12:09:14.546793    3976 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 12:09:14.550801    3976 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 12:09:14.550839    3976 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 12:09:14.564129    3976 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 12:09:14.564141    3976 start.go:495] detecting cgroup driver to use...
	I0818 12:09:14.564243    3976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:09:14.581664    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0818 12:09:14.590425    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0818 12:09:14.599077    3976 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0818 12:09:14.599120    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0818 12:09:14.607868    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:09:14.616526    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0818 12:09:14.625074    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:09:14.633725    3976 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 12:09:14.642461    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0818 12:09:14.651030    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0818 12:09:14.659717    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0818 12:09:14.668509    3976 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 12:09:14.676419    3976 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 12:09:14.684357    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:14.777696    3976 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0818 12:09:14.795379    3976 start.go:495] detecting cgroup driver to use...
	I0818 12:09:14.795465    3976 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0818 12:09:14.808091    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:09:14.819351    3976 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 12:09:14.834858    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:09:14.845068    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:09:14.855088    3976 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0818 12:09:14.879151    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:09:14.889782    3976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:09:14.904555    3976 ssh_runner.go:195] Run: which cri-dockerd
	I0818 12:09:14.907616    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0818 12:09:14.914893    3976 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0818 12:09:14.928498    3976 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0818 12:09:15.021302    3976 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0818 12:09:15.126534    3976 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0818 12:09:15.126611    3976 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0818 12:09:15.141437    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:15.238491    3976 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 12:09:17.633635    3976 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.395193434s)
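
The 130-byte /etc/docker/daemon.json pushed above is not echoed into the log, so its exact contents stay elided here. Given the "configuring docker to use \"cgroupfs\" as cgroup driver" line, a plausible shape would pin the driver via exec-opts; the struct below is purely hypothetical, not the actual file:

package main

import (
	"encoding/json"
	"fmt"
)

// daemonConfig models only the one field this hypothetical daemon.json
// needs; the real file minikube pushes is not shown in the log.
type daemonConfig struct {
	ExecOpts []string `json:"exec-opts"`
}

func main() {
	cfg := daemonConfig{ExecOpts: []string{"native.cgroupdriver=cgroupfs"}}
	b, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // candidate body for /etc/docker/daemon.json
}
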
	I0818 12:09:17.633701    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0818 12:09:17.644119    3976 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0818 12:09:17.657413    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:09:17.668074    3976 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0818 12:09:17.762478    3976 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0818 12:09:17.858367    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:17.948600    3976 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0818 12:09:17.962148    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:09:17.972120    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:18.070649    3976 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0818 12:09:18.132791    3976 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0818 12:09:18.132869    3976 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0818 12:09:18.137140    3976 start.go:563] Will wait 60s for crictl version
	I0818 12:09:18.137200    3976 ssh_runner.go:195] Run: which crictl
	I0818 12:09:18.140608    3976 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 12:09:18.167352    3976 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
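
The two "Will wait 60s" lines above are simple poll loops: stat the cri-dockerd socket (and then run crictl version) until the call succeeds or the deadline passes. A minimal sketch of that wait, not minikube's actual code:

package provision

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until path exists or timeout elapses, mirroring the
// "Will wait 60s for socket path /var/run/cri-dockerd.sock" step.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

// e.g. waitForPath("/var/run/cri-dockerd.sock", 60*time.Second)
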
	I0818 12:09:18.167422    3976 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:09:18.186476    3976 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:09:18.224169    3976 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0818 12:09:18.224214    3976 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:09:18.224595    3976 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0818 12:09:18.229086    3976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
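
The bash one-liner above is the idempotent hosts-file rewrite: drop any existing host.minikube.internal line, append the fresh mapping, and copy the temp file back over /etc/hosts. The same logic in Go (a sketch, not the ssh_runner implementation):

package provision

import (
	"os"
	"strings"
)

// setHostsEntry replaces any existing "<ip>\t<host>" line in hostsPath
// with the given mapping, appending it if absent.
func setHostsEntry(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Same filter as the grep -v $'\thost.minikube.internal$' above.
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

// e.g. setHostsEntry("/etc/hosts", "192.169.0.1", "host.minikube.internal")
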
	I0818 12:09:18.238631    3976 kubeadm.go:883] updating cluster {Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 12:09:18.238717    3976 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:09:18.238780    3976 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0818 12:09:18.252546    3976 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0818 12:09:18.252557    3976 docker.go:615] Images already preloaded, skipping extraction
	I0818 12:09:18.252627    3976 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0818 12:09:18.266684    3976 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0818 12:09:18.266703    3976 cache_images.go:84] Images are preloaded, skipping loading
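
"Images are preloaded, skipping loading" falls out of a set comparison: every image required for Kubernetes v1.31.0 on docker already appears in the `docker images` listing above. A sketch of that check, with the inventory copied from the log output (the expected list is illustrative, not minikube's exact required set):

package main

import "fmt"

func main() {
	// Listing from `docker images --format {{.Repository}}:{{.Tag}}` above.
	have := map[string]bool{
		"kindest/kindnetd:v20240813-c6f155d6":             true,
		"registry.k8s.io/kube-scheduler:v1.31.0":          true,
		"registry.k8s.io/kube-apiserver:v1.31.0":          true,
		"registry.k8s.io/kube-controller-manager:v1.31.0": true,
		"registry.k8s.io/kube-proxy:v1.31.0":              true,
		"registry.k8s.io/etcd:3.5.15-0":                   true,
		"registry.k8s.io/pause:3.10":                      true,
		"registry.k8s.io/coredns/coredns:v1.11.1":         true,
		"gcr.io/k8s-minikube/storage-provisioner:v5":      true,
	}
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.31.0",
		"registry.k8s.io/kube-controller-manager:v1.31.0",
		"registry.k8s.io/kube-scheduler:v1.31.0",
		"registry.k8s.io/kube-proxy:v1.31.0",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/pause:3.10",
		"registry.k8s.io/coredns/coredns:v1.11.1",
	}
	var missing []string
	for _, img := range expected {
		if !have[img] {
			missing = append(missing, img)
		}
	}
	fmt.Println("preload satisfied:", len(missing) == 0, "missing:", missing)
}
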
	I0818 12:09:18.266713    3976 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0818 12:09:18.266790    3976 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-373000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 12:09:18.266861    3976 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0818 12:09:18.304192    3976 cni.go:84] Creating CNI manager for ""
	I0818 12:09:18.304204    3976 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0818 12:09:18.304213    3976 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 12:09:18.304229    3976 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-373000 NodeName:ha-373000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 12:09:18.304320    3976 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-373000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
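The kubeadm config above is one file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A quick way to sanity-check a rendered copy before kubeadm consumes it, using gopkg.in/yaml.v3 (an illustration, not a step minikube performs):

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF after the fourth document
		}
		fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
		// The kubelet's cgroupDriver must agree with docker's (cgroupfs here).
		if doc["kind"] == "KubeletConfiguration" && doc["cgroupDriver"] != "cgroupfs" {
			fmt.Println("warning: cgroupDriver mismatch with the container runtime")
		}
	}
}
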
	I0818 12:09:18.304334    3976 kube-vip.go:115] generating kube-vip config ...
	I0818 12:09:18.304382    3976 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0818 12:09:18.316732    3976 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0818 12:09:18.316793    3976 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
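
kube-vip will answer on the HA virtual IP (192.169.0.254:8443, per the address and lb_port values above) once a control-plane node holds the plndr-cp-lock lease. A quick reachability probe of that endpoint (a sketch; InsecureSkipVerify is only acceptable for a smoke test like this, where wiring up the cluster CA would be overkill):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Smoke test only: skip verification instead of loading the CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	// /version is served to anonymous clients on default kubeadm clusters.
	resp, err := client.Get("https://192.169.0.254:8443/version")
	if err != nil {
		fmt.Println("VIP not answering:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("VIP answered with status", resp.Status)
}
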
	I0818 12:09:18.316840    3976 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 12:09:18.324597    3976 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 12:09:18.324641    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0818 12:09:18.331779    3976 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0818 12:09:18.345158    3976 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 12:09:18.358298    3976 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0818 12:09:18.372286    3976 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0818 12:09:18.385485    3976 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0818 12:09:18.388341    3976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 12:09:18.397526    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:18.496612    3976 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 12:09:18.511160    3976 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000 for IP: 192.169.0.5
	I0818 12:09:18.511172    3976 certs.go:194] generating shared ca certs ...
	I0818 12:09:18.511184    3976 certs.go:226] acquiring lock for ca certs: {Name:mkf9c4ce6e92bb713042213e4605fc93500c1ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:09:18.511356    3976 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key
	I0818 12:09:18.511436    3976 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key
	I0818 12:09:18.511446    3976 certs.go:256] generating profile certs ...
	I0818 12:09:18.511538    3976 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.key
	I0818 12:09:18.511564    3976 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.55d44b69
	I0818 12:09:18.511579    3976 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.55d44b69 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0818 12:09:18.678090    3976 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.55d44b69 ...
	I0818 12:09:18.678108    3976 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.55d44b69: {Name:mk412ce60d50ec37c24febde03f7225e8a48a24c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:09:18.678466    3976 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.55d44b69 ...
	I0818 12:09:18.678480    3976 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.55d44b69: {Name:mke31239238122280f7cbf00316b2acd43533e9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:09:18.678743    3976 certs.go:381] copying /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.55d44b69 -> /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt
	I0818 12:09:18.678987    3976 certs.go:385] copying /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.55d44b69 -> /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key
	I0818 12:09:18.679293    3976 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key
	I0818 12:09:18.679306    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0818 12:09:18.679332    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0818 12:09:18.679353    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0818 12:09:18.679374    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0818 12:09:18.679394    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0818 12:09:18.679414    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0818 12:09:18.679441    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0818 12:09:18.679462    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0818 12:09:18.679567    3976 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem (1338 bytes)
	W0818 12:09:18.679618    3976 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526_empty.pem, impossibly tiny 0 bytes
	I0818 12:09:18.679629    3976 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem (1675 bytes)
	I0818 12:09:18.679662    3976 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem (1082 bytes)
	I0818 12:09:18.679695    3976 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem (1123 bytes)
	I0818 12:09:18.679735    3976 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem (1675 bytes)
	I0818 12:09:18.679815    3976 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:09:18.679851    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /usr/share/ca-certificates/15262.pem
	I0818 12:09:18.679895    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:09:18.679917    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem -> /usr/share/ca-certificates/1526.pem
	I0818 12:09:18.680416    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 12:09:18.731491    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0818 12:09:18.777149    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 12:09:18.836957    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0818 12:09:18.879727    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0818 12:09:18.904838    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 12:09:18.933787    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 12:09:18.969389    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 12:09:18.994753    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /usr/share/ca-certificates/15262.pem (1708 bytes)
	I0818 12:09:19.013849    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 12:09:19.033471    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem --> /usr/share/ca-certificates/1526.pem (1338 bytes)
	I0818 12:09:19.052595    3976 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 12:09:19.066128    3976 ssh_runner.go:195] Run: openssl version
	I0818 12:09:19.070271    3976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15262.pem && ln -fs /usr/share/ca-certificates/15262.pem /etc/ssl/certs/15262.pem"
	I0818 12:09:19.079228    3976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15262.pem
	I0818 12:09:19.082728    3976 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:54 /usr/share/ca-certificates/15262.pem
	I0818 12:09:19.082763    3976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15262.pem
	I0818 12:09:19.086877    3976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15262.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 12:09:19.095804    3976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 12:09:19.104889    3976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:09:19.108208    3976 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:38 /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:09:19.108241    3976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:09:19.112406    3976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 12:09:19.121720    3976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1526.pem && ln -fs /usr/share/ca-certificates/1526.pem /etc/ssl/certs/1526.pem"
	I0818 12:09:19.130845    3976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1526.pem
	I0818 12:09:19.134345    3976 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:54 /usr/share/ca-certificates/1526.pem
	I0818 12:09:19.134389    3976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1526.pem
	I0818 12:09:19.138941    3976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1526.pem /etc/ssl/certs/51391683.0"
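
The three test -L || ln -fs runs above build the hashed-name symlinks OpenSSL uses for CA lookup: /etc/ssl/certs/<subject-hash>.0 must point at the PEM, where the hash comes from openssl x509 -hash (3ec20f2e, b5213941, and 51391683 in this log). A sketch that reproduces one link, shelling out for the hash since Go's standard library has no subject-hash helper:

package certs

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert creates /etc/ssl/certs/<subject-hash>.0 -> pemPath, the
// layout the `openssl x509 -hash` plus `ln -fs` steps above produce.
func linkCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mimic ln -fs: replace an existing link
	return os.Symlink(pemPath, link)
}
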
	I0818 12:09:19.148376    3976 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 12:09:19.151715    3976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 12:09:19.155985    3976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 12:09:19.160273    3976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 12:09:19.165064    3976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 12:09:19.169962    3976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 12:09:19.174244    3976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
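
Each openssl x509 -checkend 86400 run above asks whether the certificate expires within the next 24 hours (non-zero exit if so), which is how minikube decides whether control-plane certs need regeneration. The pure-Go equivalent (a sketch):

package certs

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within d, matching `openssl x509 -checkend` semantics.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

// e.g. expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
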
	I0818 12:09:19.178473    3976 kubeadm.go:392] StartCluster: {Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:09:19.178593    3976 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0818 12:09:19.190838    3976 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 12:09:19.199172    3976 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 12:09:19.199186    3976 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 12:09:19.199227    3976 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 12:09:19.207402    3976 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 12:09:19.207710    3976 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-373000" does not appear in /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:09:19.207791    3976 kubeconfig.go:62] /Users/jenkins/minikube-integration/19423-1007/kubeconfig needs updating (will repair): [kubeconfig missing "ha-373000" cluster setting kubeconfig missing "ha-373000" context setting]
	I0818 12:09:19.207967    3976 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/kubeconfig: {Name:mk884388b2510ace5b23e8fdd3343cda9524a1f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:09:19.208584    3976 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:09:19.208770    3976 kapi.go:59] client config for ha-373000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x52acf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
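
The rest.Config dump above shows minikube talking to the cluster directly with the profile's client certificate. Building the same kind of client yourself with k8s.io/client-go looks roughly like this (paths copied from the log; the node listing at the end is just a smoke test):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	home := "/Users/jenkins/minikube-integration/19423-1007/.minikube"
	cfg := &rest.Config{
		Host: "https://192.169.0.5:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: home + "/profiles/ha-373000/client.crt",
			KeyFile:  home + "/profiles/ha-373000/client.key",
			CAFile:   home + "/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}
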
	I0818 12:09:19.209064    3976 cert_rotation.go:140] Starting client certificate rotation controller
	I0818 12:09:19.209255    3976 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 12:09:19.217108    3976 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0818 12:09:19.217125    3976 kubeadm.go:597] duration metric: took 17.934031ms to restartPrimaryControlPlane
	I0818 12:09:19.217132    3976 kubeadm.go:394] duration metric: took 38.665023ms to StartCluster
	I0818 12:09:19.217145    3976 settings.go:142] acquiring lock: {Name:mk54874f9a6ab5b428c8697c9ef71cbcdde2d89e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:09:19.217216    3976 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:09:19.217617    3976 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/kubeconfig: {Name:mk884388b2510ace5b23e8fdd3343cda9524a1f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:09:19.217869    3976 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:09:19.217886    3976 start.go:241] waiting for startup goroutines ...
	I0818 12:09:19.217906    3976 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 12:09:19.217983    3976 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:09:19.263234    3976 out.go:177] * Enabled addons: 
	I0818 12:09:19.284302    3976 addons.go:510] duration metric: took 66.388858ms for enable addons: enabled=[]
	I0818 12:09:19.284387    3976 start.go:246] waiting for cluster config update ...
	I0818 12:09:19.284400    3976 start.go:255] writing updated cluster config ...
	I0818 12:09:19.306484    3976 out.go:201] 
	I0818 12:09:19.327608    3976 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:09:19.327742    3976 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:09:19.350369    3976 out.go:177] * Starting "ha-373000-m02" control-plane node in "ha-373000" cluster
	I0818 12:09:19.392104    3976 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:09:19.392164    3976 cache.go:56] Caching tarball of preloaded images
	I0818 12:09:19.392336    3976 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0818 12:09:19.392355    3976 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:09:19.392486    3976 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:09:19.393415    3976 start.go:360] acquireMachinesLock for ha-373000-m02: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:09:19.393522    3976 start.go:364] duration metric: took 80.918µs to acquireMachinesLock for "ha-373000-m02"
	I0818 12:09:19.393546    3976 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:09:19.393556    3976 fix.go:54] fixHost starting: m02
	I0818 12:09:19.393965    3976 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:09:19.393990    3976 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:09:19.403655    3976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52040
	I0818 12:09:19.404217    3976 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:09:19.404634    3976 main.go:141] libmachine: Using API Version  1
	I0818 12:09:19.404650    3976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:09:19.405004    3976 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:09:19.405118    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:19.405222    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetState
	I0818 12:09:19.405303    3976 main.go:141] libmachine: (ha-373000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:19.405380    3976 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid from json: 3847
	I0818 12:09:19.406287    3976 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid 3847 missing from process table
	I0818 12:09:19.406302    3976 fix.go:112] recreateIfNeeded on ha-373000-m02: state=Stopped err=<nil>
	I0818 12:09:19.406312    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	W0818 12:09:19.406463    3976 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:09:19.448356    3976 out.go:177] * Restarting existing hyperkit VM for "ha-373000-m02" ...
	I0818 12:09:19.469229    3976 main.go:141] libmachine: (ha-373000-m02) Calling .Start
	I0818 12:09:19.469501    3976 main.go:141] libmachine: (ha-373000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:19.469542    3976 main.go:141] libmachine: (ha-373000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid
	I0818 12:09:19.471314    3976 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid 3847 missing from process table
	I0818 12:09:19.471327    3976 main.go:141] libmachine: (ha-373000-m02) DBG | pid 3847 is in state "Stopped"
	I0818 12:09:19.471351    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid...
	I0818 12:09:19.471584    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Using UUID 7a237572-4e62-4b98-a476-83254bfde967
	I0818 12:09:19.500704    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Generated MAC ca:b5:c4:e6:47:79
	I0818 12:09:19.500730    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000
	I0818 12:09:19.500855    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7a237572-4e62-4b98-a476-83254bfde967", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c4600)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:09:19.500885    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7a237572-4e62-4b98-a476-83254bfde967", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c4600)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:09:19.500929    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "7a237572-4e62-4b98-a476-83254bfde967", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/ha-373000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"}
	I0818 12:09:19.500977    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 7a237572-4e62-4b98-a476-83254bfde967 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/ha-373000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"
	I0818 12:09:19.500998    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 12:09:19.502361    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 DEBUG: hyperkit: Pid is 3997
	I0818 12:09:19.502828    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Attempt 0
	I0818 12:09:19.502885    3976 main.go:141] libmachine: (ha-373000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:19.502920    3976 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid from json: 3997
	I0818 12:09:19.504725    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Searching for ca:b5:c4:e6:47:79 in /var/db/dhcpd_leases ...
	I0818 12:09:19.504780    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0818 12:09:19.504828    3976 main.go:141] libmachine: (ha-373000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:09:19.504848    3976 main.go:141] libmachine: (ha-373000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:09:19.504870    3976 main.go:141] libmachine: (ha-373000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:09:19.504882    3976 main.go:141] libmachine: (ha-373000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c3975b}
	I0818 12:09:19.504895    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Found match: ca:b5:c4:e6:47:79
	I0818 12:09:19.504900    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetConfigRaw
	I0818 12:09:19.504907    3976 main.go:141] libmachine: (ha-373000-m02) DBG | IP: 192.169.0.6
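The lines above show the hyperkit driver recovering the node's IP by matching the VM's generated MAC address (ca:b5:c4:e6:47:79) against the DHCP leases in /var/db/dhcpd_leases. A minimal Go sketch of that lookup follows; the key=value field layout is an assumption inferred from the parsed entries printed in this log (the log shows the resulting structs, not the raw file), and ipForMAC is a hypothetical helper, not minikube's actual code.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipForMAC scans a dhcpd_leases-style file and returns the ip_address of
// the lease whose hw_address ends in the given MAC. Assumed file layout:
// one field per line, e.g. "ip_address=192.169.0.6" and
// "hw_address=1,ca:b5:c4:e6:47:79".
func ipForMAC(leasesPath, mac string) (string, error) {
	f, err := os.Open(leasesPath)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			// Remember the most recent IP; it belongs to the current lease block.
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			if strings.HasSuffix(line, ","+mac) {
				return ip, nil
			}
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "ca:b5:c4:e6:47:79")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip) // expected "192.169.0.6" given the leases shown above
}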
	I0818 12:09:19.505665    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetIP
	I0818 12:09:19.505858    3976 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:09:19.506316    3976 machine.go:93] provisionDockerMachine start ...
	I0818 12:09:19.506328    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:19.506474    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:19.506602    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:19.506707    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:19.506790    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:19.506894    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:19.507039    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:19.507197    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:19.507205    3976 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 12:09:19.510551    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 12:09:19.519215    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 12:09:19.520168    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:09:19.520203    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:09:19.520228    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:09:19.520254    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:09:19.902342    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 12:09:19.902357    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 12:09:20.017440    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:09:20.017463    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:09:20.017471    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:09:20.017477    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:09:20.018303    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 12:09:20.018315    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 12:09:25.632462    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:25 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0818 12:09:25.632549    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:25 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0818 12:09:25.632559    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:25 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0818 12:09:25.657887    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:25 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0818 12:09:28.954523    3976 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.6:22: connect: connection refused
	I0818 12:09:32.012675    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 12:09:32.012690    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetMachineName
	I0818 12:09:32.012844    3976 buildroot.go:166] provisioning hostname "ha-373000-m02"
	I0818 12:09:32.012857    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetMachineName
	I0818 12:09:32.012969    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.013100    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:32.013206    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.013295    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.013399    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:32.013577    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:32.013797    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:32.013807    3976 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-373000-m02 && echo "ha-373000-m02" | sudo tee /etc/hostname
	I0818 12:09:32.083655    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-373000-m02
	
	I0818 12:09:32.083671    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.083802    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:32.083888    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.083968    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.084051    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:32.084177    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:32.084328    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:32.084343    3976 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-373000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-373000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-373000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
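The SSH snippet above is an idempotent /etc/hosts update with three branches: do nothing if the hostname is already present, rewrite an existing 127.0.1.1 line if there is one, otherwise append a new entry. The Go sketch below reimplements that branch logic locally; ensureHostsEntry is a hypothetical helper, and its matching rules only approximate the grep/sed patterns in the log.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the shell snippet: ensure "127.0.1.1 <hostname>"
// exists in the hosts file without duplicating an existing entry.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	// First branch: skip entirely if some line already ends in the hostname.
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) > 0 && f[len(f)-1] == hostname {
			return nil
		}
	}
	// sed branch: rewrite an existing 127.0.1.1 entry in place.
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
		}
	}
	// tee -a branch: append a fresh entry.
	lines = append(lines, "127.0.1.1 "+hostname)
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	// Demo against a scratch copy; editing the real /etc/hosts needs root.
	if err := ensureHostsEntry("/tmp/hosts-demo", "ha-373000-m02"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}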
	I0818 12:09:32.145743    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 12:09:32.145757    3976 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1007/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1007/.minikube}
	I0818 12:09:32.145771    3976 buildroot.go:174] setting up certificates
	I0818 12:09:32.145778    3976 provision.go:84] configureAuth start
	I0818 12:09:32.145785    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetMachineName
	I0818 12:09:32.145913    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetIP
	I0818 12:09:32.146013    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.146119    3976 provision.go:143] copyHostCerts
	I0818 12:09:32.146155    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:09:32.146207    3976 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem, removing ...
	I0818 12:09:32.146213    3976 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:09:32.146346    3976 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem (1082 bytes)
	I0818 12:09:32.146563    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:09:32.146599    3976 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem, removing ...
	I0818 12:09:32.146604    3976 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:09:32.146673    3976 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem (1123 bytes)
	I0818 12:09:32.146816    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:09:32.146847    3976 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem, removing ...
	I0818 12:09:32.146852    3976 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:09:32.146916    3976 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem (1675 bytes)
	I0818 12:09:32.147063    3976 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem org=jenkins.ha-373000-m02 san=[127.0.0.1 192.169.0.6 ha-373000-m02 localhost minikube]
	I0818 12:09:32.439235    3976 provision.go:177] copyRemoteCerts
	I0818 12:09:32.439288    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 12:09:32.439303    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.439451    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:32.439555    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.439662    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:32.439767    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:09:32.473899    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 12:09:32.473971    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 12:09:32.492902    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 12:09:32.492977    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0818 12:09:32.512205    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 12:09:32.512269    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 12:09:32.531269    3976 provision.go:87] duration metric: took 385.496037ms to configureAuth
	I0818 12:09:32.531282    3976 buildroot.go:189] setting minikube options for container-runtime
	I0818 12:09:32.531440    3976 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:09:32.531454    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:32.531586    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.531687    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:32.531797    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.531905    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.531985    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:32.532087    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:32.532212    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:32.532220    3976 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0818 12:09:32.586134    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0818 12:09:32.586145    3976 buildroot.go:70] root file system type: tmpfs
	I0818 12:09:32.586228    3976 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0818 12:09:32.586239    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.586366    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:32.586454    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.586566    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.586649    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:32.586801    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:32.586940    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:32.586986    3976 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0818 12:09:32.654663    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0818 12:09:32.654688    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.654820    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:32.654904    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.654974    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.655053    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:32.655180    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:32.655330    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:32.655343    3976 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0818 12:09:34.321102    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0818 12:09:34.321115    3976 machine.go:96] duration metric: took 14.8152512s to provisionDockerMachine
	I0818 12:09:34.321123    3976 start.go:293] postStartSetup for "ha-373000-m02" (driver="hyperkit")
	I0818 12:09:34.321131    3976 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 12:09:34.321140    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:34.321324    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 12:09:34.321348    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:34.321440    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:34.321528    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:34.321619    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:34.321715    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:09:34.356724    3976 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 12:09:34.363921    3976 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 12:09:34.363935    3976 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/addons for local assets ...
	I0818 12:09:34.364038    3976 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/files for local assets ...
	I0818 12:09:34.364185    3976 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> 15262.pem in /etc/ssl/certs
	I0818 12:09:34.364192    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /etc/ssl/certs/15262.pem
	I0818 12:09:34.364347    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 12:09:34.379409    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:09:34.407459    3976 start.go:296] duration metric: took 86.328927ms for postStartSetup
	I0818 12:09:34.407481    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:34.407638    3976 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0818 12:09:34.407658    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:34.407738    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:34.407823    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:34.407908    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:34.407985    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:09:34.441305    3976 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0818 12:09:34.441365    3976 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0818 12:09:34.475756    3976 fix.go:56] duration metric: took 15.082665832s for fixHost
	I0818 12:09:34.475780    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:34.475917    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:34.476014    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:34.476109    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:34.476204    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:34.476334    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:34.476475    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:34.476483    3976 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 12:09:34.531245    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724008174.705830135
	
	I0818 12:09:34.531256    3976 fix.go:216] guest clock: 1724008174.705830135
	I0818 12:09:34.531265    3976 fix.go:229] Guest: 2024-08-18 12:09:34.705830135 -0700 PDT Remote: 2024-08-18 12:09:34.475769 -0700 PDT m=+34.122913514 (delta=230.061135ms)
	I0818 12:09:34.531276    3976 fix.go:200] guest clock delta is within tolerance: 230.061135ms
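The fix step above runs `date +%s.%N` on the guest, parses the result, and proceeds only if the skew against the host clock is within tolerance. A minimal sketch of that comparison, using the sample output from this log; parseGuestClock is a hypothetical helper and the 2s threshold is an illustrative assumption, not minikube's configured tolerance.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts "seconds.nanoseconds" output from `date +%s.%N`
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Normalize the fractional part to exactly 9 digits of nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1724008174.705830135\n")
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	const tolerance = 2 * time.Second // assumed threshold for illustration
	if delta < -tolerance || delta > tolerance {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		return
	}
	fmt.Printf("guest clock delta %v is within tolerance\n", delta)
}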
	I0818 12:09:34.531281    3976 start.go:83] releasing machines lock for "ha-373000-m02", held for 15.138221498s
	I0818 12:09:34.531298    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:34.531428    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetIP
	I0818 12:09:34.555141    3976 out.go:177] * Found network options:
	I0818 12:09:34.576875    3976 out.go:177]   - NO_PROXY=192.169.0.5
	W0818 12:09:34.597784    3976 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 12:09:34.597830    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:34.598688    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:34.598932    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:34.599031    3976 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 12:09:34.599086    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	W0818 12:09:34.599150    3976 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 12:09:34.599257    3976 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0818 12:09:34.599278    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:34.599308    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:34.599482    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:34.599521    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:34.599684    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:34.599720    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:34.599871    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:34.599921    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:09:34.600032    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	W0818 12:09:34.631739    3976 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 12:09:34.631799    3976 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 12:09:34.677593    3976 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 12:09:34.677615    3976 start.go:495] detecting cgroup driver to use...
	I0818 12:09:34.677737    3976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:09:34.693773    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0818 12:09:34.702951    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0818 12:09:34.711799    3976 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0818 12:09:34.711840    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0818 12:09:34.720906    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:09:34.729957    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0818 12:09:34.738902    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:09:34.747932    3976 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 12:09:34.757312    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0818 12:09:34.766375    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0818 12:09:34.775307    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0818 12:09:34.784400    3976 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 12:09:34.792630    3976 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 12:09:34.801021    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:34.911872    3976 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0818 12:09:34.930682    3976 start.go:495] detecting cgroup driver to use...
	I0818 12:09:34.930753    3976 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0818 12:09:34.944697    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:09:34.956782    3976 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 12:09:34.974233    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:09:34.986114    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:09:34.998297    3976 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0818 12:09:35.018378    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:09:35.029759    3976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:09:35.044553    3976 ssh_runner.go:195] Run: which cri-dockerd
	I0818 12:09:35.047654    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0818 12:09:35.055897    3976 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0818 12:09:35.069339    3976 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0818 12:09:35.163048    3976 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0818 12:09:35.263866    3976 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0818 12:09:35.263888    3976 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0818 12:09:35.281642    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:35.375004    3976 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 12:10:36.400829    3976 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.027707476s)
	I0818 12:10:36.400907    3976 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0818 12:10:36.437434    3976 out.go:201] 
	W0818 12:10:36.459246    3976 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 18 19:09:33 ha-373000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 18 19:09:33 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:33.132734313Z" level=info msg="Starting up"
	Aug 18 19:09:33 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:33.133217341Z" level=info msg="containerd not running, starting managed containerd"
	Aug 18 19:09:33 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:33.133706453Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=503
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.150884592Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165526624Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165600672Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165665661Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165701505Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165883163Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165980711Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166114419Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166158739Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166192923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166222480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166373263Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166624364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168284638Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168338968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168477528Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168522410Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168684236Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168742254Z" level=info msg="metadata content store policy set" policy=shared
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172229271Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172291175Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172328725Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172361584Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172397084Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172469115Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172636000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172713269Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172756026Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172790721Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172822478Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172857013Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172889097Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172923123Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172955052Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172985350Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173017995Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173047134Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173082956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173138952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173171857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173266115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173303729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173337305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173367548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173397195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173426651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173461907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173491945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173521151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173551817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173584158Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173620017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173651734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173681138Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173753818Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173797160Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173851051Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173888629Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173919044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173948712Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173979628Z" level=info msg="NRI interface is disabled by configuration."
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.174202763Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.174288578Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.174373231Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.174419718Z" level=info msg="containerd successfully booted in 0.024281s"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.163281667Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.193454663Z" level=info msg="Loading containers: start."
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.358483324Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.419779026Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.464759087Z" level=info msg="Loading containers: done."
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.475407585Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.475556691Z" level=info msg="Daemon has completed initialization"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.493178383Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.493236047Z" level=info msg="API listen on [::]:2376"
	Aug 18 19:09:34 ha-373000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 18 19:09:35 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:35.562066100Z" level=info msg="Processing signal 'terminated'"
	Aug 18 19:09:35 ha-373000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 18 19:09:35 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:35.563196599Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 18 19:09:35 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:35.563381674Z" level=info msg="Daemon shutdown complete"
	Aug 18 19:09:35 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:35.563404669Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 18 19:09:35 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:35.563423915Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 18 19:09:36 ha-373000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 18 19:09:36 ha-373000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 18 19:09:36 ha-373000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 18 19:09:36 ha-373000-m02 dockerd[1172]: time="2024-08-18T19:09:36.603637435Z" level=info msg="Starting up"
	Aug 18 19:10:36 ha-373000-m02 dockerd[1172]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 18 19:10:36 ha-373000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 18 19:10:36 ha-373000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 18 19:10:36 ha-373000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0818 12:10:36.459348    3976 out.go:270] * 
	W0818 12:10:36.460605    3976 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:10:36.503171    3976 out.go:201] 
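	
	The root cause in the m02 journal above is dockerd timing out while dialing containerd's socket ("failed to dial \"/run/containerd/containerd.sock\": context deadline exceeded"). A minimal Go sketch of that failure mode, assuming only the socket path shown in the log; the 5-second budget is illustrative, not dockerd's actual setting:
	
	package main
	
	import (
		"context"
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// Probe whether containerd is accepting connections on its socket.
		// A context deadline bounds the wait; if the connect never completes
		// in time, the error mirrors the journal entry above.
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		var d net.Dialer
		conn, err := d.DialContext(ctx, "unix", "/run/containerd/containerd.sock")
		if err != nil {
			fmt.Println("dial failed:", err) // e.g. context deadline exceeded
			return
		}
		defer conn.Close()
		fmt.Println("containerd socket is up")
	}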
	
	
	==> Docker <==
	Aug 18 19:09:46 ha-373000 dockerd[1177]: time="2024-08-18T19:09:46.453438095Z" level=info msg="shim disconnected" id=6f6bc1d9592b465e3b7c4d9db0e74b67040cbbed37626775b4aad68af80ddeb2 namespace=moby
	Aug 18 19:09:46 ha-373000 dockerd[1177]: time="2024-08-18T19:09:46.453510509Z" level=warning msg="cleaning up after shim disconnected" id=6f6bc1d9592b465e3b7c4d9db0e74b67040cbbed37626775b4aad68af80ddeb2 namespace=moby
	Aug 18 19:09:46 ha-373000 dockerd[1177]: time="2024-08-18T19:09:46.453519178Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 18 19:09:46 ha-373000 dockerd[1169]: time="2024-08-18T19:09:46.453809871Z" level=info msg="ignoring event" container=6f6bc1d9592b465e3b7c4d9db0e74b67040cbbed37626775b4aad68af80ddeb2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 18 19:09:47 ha-373000 dockerd[1177]: time="2024-08-18T19:09:47.461847284Z" level=info msg="shim disconnected" id=0a3644cb0db7c69e66c1b1d0f3234fba20e233a15e27e5772148d80d14f33f4e namespace=moby
	Aug 18 19:09:47 ha-373000 dockerd[1177]: time="2024-08-18T19:09:47.461900797Z" level=warning msg="cleaning up after shim disconnected" id=0a3644cb0db7c69e66c1b1d0f3234fba20e233a15e27e5772148d80d14f33f4e namespace=moby
	Aug 18 19:09:47 ha-373000 dockerd[1177]: time="2024-08-18T19:09:47.461909634Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 18 19:09:47 ha-373000 dockerd[1169]: time="2024-08-18T19:09:47.462210879Z" level=info msg="ignoring event" container=0a3644cb0db7c69e66c1b1d0f3234fba20e233a15e27e5772148d80d14f33f4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 18 19:10:04 ha-373000 dockerd[1177]: time="2024-08-18T19:10:04.870147305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 18 19:10:04 ha-373000 dockerd[1177]: time="2024-08-18T19:10:04.870333575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 18 19:10:04 ha-373000 dockerd[1177]: time="2024-08-18T19:10:04.870347019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:10:04 ha-373000 dockerd[1177]: time="2024-08-18T19:10:04.870447403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:10:06 ha-373000 dockerd[1177]: time="2024-08-18T19:10:06.866261869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 18 19:10:06 ha-373000 dockerd[1177]: time="2024-08-18T19:10:06.866358878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 18 19:10:06 ha-373000 dockerd[1177]: time="2024-08-18T19:10:06.866371963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:10:06 ha-373000 dockerd[1177]: time="2024-08-18T19:10:06.866695913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:10:27 ha-373000 dockerd[1177]: time="2024-08-18T19:10:27.508003021Z" level=info msg="shim disconnected" id=24788de6a779b6b3b57b2e2becb6248a4b461c40237836c9a085858f19bf537e namespace=moby
	Aug 18 19:10:27 ha-373000 dockerd[1177]: time="2024-08-18T19:10:27.508123156Z" level=warning msg="cleaning up after shim disconnected" id=24788de6a779b6b3b57b2e2becb6248a4b461c40237836c9a085858f19bf537e namespace=moby
	Aug 18 19:10:27 ha-373000 dockerd[1177]: time="2024-08-18T19:10:27.508131683Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 18 19:10:27 ha-373000 dockerd[1169]: time="2024-08-18T19:10:27.508457282Z" level=info msg="ignoring event" container=24788de6a779b6b3b57b2e2becb6248a4b461c40237836c9a085858f19bf537e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 18 19:10:27 ha-373000 dockerd[1177]: time="2024-08-18T19:10:27.520349911Z" level=warning msg="cleanup warnings time=\"2024-08-18T19:10:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 18 19:10:27 ha-373000 dockerd[1177]: time="2024-08-18T19:10:27.569676280Z" level=info msg="shim disconnected" id=7e27aa53db964848f64d9282a63325b77e8066d4f7afd7e078ca49d74f625756 namespace=moby
	Aug 18 19:10:27 ha-373000 dockerd[1177]: time="2024-08-18T19:10:27.569734174Z" level=warning msg="cleaning up after shim disconnected" id=7e27aa53db964848f64d9282a63325b77e8066d4f7afd7e078ca49d74f625756 namespace=moby
	Aug 18 19:10:27 ha-373000 dockerd[1177]: time="2024-08-18T19:10:27.569742722Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 18 19:10:27 ha-373000 dockerd[1169]: time="2024-08-18T19:10:27.569876898Z" level=info msg="ignoring event" container=7e27aa53db964848f64d9282a63325b77e8066d4f7afd7e078ca49d74f625756 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7e27aa53db964       604f5db92eaa8       36 seconds ago       Exited              kube-apiserver            3                   5f2bcb86e47be       kube-apiserver-ha-373000
	24788de6a779b       045733566833c       38 seconds ago       Exited              kube-controller-manager   4                   45b85b05f9eab       kube-controller-manager-ha-373000
	e7bf93d680505       38af8ddebf499       About a minute ago   Running             kube-vip                  1                   37cbb7af9134a       kube-vip-ha-373000
	5bb7217cec87f       1766f54c897f0       About a minute ago   Running             kube-scheduler            2                   11d6e68c74890       kube-scheduler-ha-373000
	4ad014ace2b0a       2e96e5913fc06       About a minute ago   Running             etcd                      2                   4905344ca55ee       etcd-ha-373000
	eb459a6cac5c5       6e38f40d628db       4 minutes ago        Exited              storage-provisioner       2                   3772c138aa65e       storage-provisioner
	fc1b30cd2c8f2       8c811b4aec35f       5 minutes ago        Exited              busybox                   1                   eb4ed9664dda9       busybox-7dff88458-hdg8r
	f3dbf3c176d9d       cbb01a7bd410d       5 minutes ago        Exited              coredns                   1                   fc1f2fb60f7c5       coredns-6f6b679f8f-rcfmc
	09b8ded75e80f       cbb01a7bd410d       5 minutes ago        Exited              coredns                   1                   bfce6a3dd1783       coredns-6f6b679f8f-hv98f
	530d580001894       ad83b2ca7b09e       5 minutes ago        Exited              kube-proxy                1                   c8f48c6f44e55       kube-proxy-2xkhp
	fbeef7aab770f       12968670680f4       5 minutes ago        Exited              kindnet-cni               1                   32a6ca59d02e7       kindnet-k4c4p
	ebe78e53d91d8       38af8ddebf499       5 minutes ago        Exited              kube-vip                  0                   32cc18cf0bf63       kube-vip-ha-373000
	a9e532272f1be       2e96e5913fc06       5 minutes ago        Exited              etcd                      1                   4c11500a40693       etcd-ha-373000
	de016fdbd6fe9       1766f54c897f0       5 minutes ago        Exited              kube-scheduler            1                   a3cc486386c46       kube-scheduler-ha-373000
	
	
	==> coredns [09b8ded75e80] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54168 - 48100 "HINFO IN 5449853140043981156.1960656544577820065. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012696853s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1317389180]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.060) (total time: 30002ms):
	Trace[1317389180]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (19:06:13.063)
	Trace[1317389180]: [30.002782846s] [30.002782846s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[804407349]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.060) (total time: 30003ms):
	Trace[804407349]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (19:06:13.063)
	Trace[804407349]: [30.003234686s] [30.003234686s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1407395902]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.063) (total time: 30001ms):
	Trace[1407395902]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (19:06:13.064)
	Trace[1407395902]: [30.001205512s] [30.001205512s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f3dbf3c176d9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48257 - 13179 "HINFO IN 3102078210809204073.2916918949998232158. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.013387746s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1929152146]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.060) (total time: 30003ms):
	Trace[1929152146]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (19:06:13.063)
	Trace[1929152146]: [30.003742558s] [30.003742558s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[763765503]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.060) (total time: 30003ms):
	Trace[763765503]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (19:06:13.064)
	Trace[763765503]: [30.003508272s] [30.003508272s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1437534784]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.063) (total time: 30000ms):
	Trace[1437534784]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:06:13.064)
	Trace[1437534784]: [30.000417221s] [30.000417221s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
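	
	Both coredns replicas report the same pattern: every list against the kubernetes Service VIP runs for the client's ~30s window and then fails with "dial tcp 10.96.0.1:443: i/o timeout". A reduced sketch of that probe; the address and timeout come from the traces above, while the HTTP client settings are assumptions for illustration:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	func main() {
		// coredns's client-go reflector lists /api/v1/namespaces (and
		// services, endpointslices) via https://10.96.0.1:443; the traces
		// show each attempt consuming the full window before the TCP dial
		// times out.
		client := &http.Client{
			Timeout: 30 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
			},
		}
		resp, err := client.Get("https://10.96.0.1:443/api/v1/namespaces?limit=500")
		if err != nil {
			fmt.Println(err) // expected here: dial tcp 10.96.0.1:443: i/o timeout
			return
		}
		resp.Body.Close()
	}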
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0818 19:10:43.314178    3043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0818 19:10:43.316042    3043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0818 19:10:43.317743    3043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0818 19:10:43.319087    3043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0818 19:10:43.320890    3043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.035419] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007963] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.691053] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000000] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006881] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.891457] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.229875] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000003] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.300202] systemd-fstab-generator[467]: Ignoring "noauto" option for root device
	[  +0.101114] systemd-fstab-generator[479]: Ignoring "noauto" option for root device
	[  +2.001813] systemd-fstab-generator[1098]: Ignoring "noauto" option for root device
	[  +0.247527] systemd-fstab-generator[1135]: Ignoring "noauto" option for root device
	[  +0.100995] systemd-fstab-generator[1147]: Ignoring "noauto" option for root device
	[  +0.114396] systemd-fstab-generator[1161]: Ignoring "noauto" option for root device
	[  +0.050935] kauditd_printk_skb: 145 callbacks suppressed
	[  +2.471749] systemd-fstab-generator[1379]: Ignoring "noauto" option for root device
	[  +0.100344] systemd-fstab-generator[1391]: Ignoring "noauto" option for root device
	[  +0.088670] systemd-fstab-generator[1403]: Ignoring "noauto" option for root device
	[  +0.117158] systemd-fstab-generator[1418]: Ignoring "noauto" option for root device
	[  +0.433473] systemd-fstab-generator[1580]: Ignoring "noauto" option for root device
	[  +6.511307] kauditd_printk_skb: 168 callbacks suppressed
	[ +21.355887] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [4ad014ace2b0] <==
	{"level":"warn","ts":"2024-08-18T19:10:37.910246Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740481722400528,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-18T19:10:38.411386Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740481722400528,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-18T19:10:38.911588Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740481722400528,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-18T19:10:39.239512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:39.239538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:39.239546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:39.239557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 2939] sent MsgPreVote request to 64535d9f3a4791ce at term 3"}
	{"level":"warn","ts":"2024-08-18T19:10:39.412430Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740481722400528,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-18T19:10:39.917438Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740481722400528,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-18T19:10:40.418292Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740481722400528,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-18T19:10:40.523142Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"64535d9f3a4791ce","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:10:40.523187Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"64535d9f3a4791ce","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:10:40.889538Z","caller":"etcdserver/v3_server.go:932","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2024-08-18T19:10:40.890745Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.003668586s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2024-08-18T19:10:40.890903Z","caller":"traceutil/trace.go:171","msg":"trace[1509650294] range","detail":"{range_begin:; range_end:; }","duration":"7.003839701s","start":"2024-08-18T19:10:33.887048Z","end":"2024-08-18T19:10:40.890888Z","steps":["trace[1509650294] 'agreement among raft nodes before linearized reading'  (duration: 7.003664446s)"],"step_count":1}
	{"level":"error","ts":"2024-08-18T19:10:40.891075Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: request timed out\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-18T19:10:41.039575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:41.039684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:41.039704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:41.039722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 2939] sent MsgPreVote request to 64535d9f3a4791ce at term 3"}
	{"level":"warn","ts":"2024-08-18T19:10:42.508620Z","caller":"etcdserver/server.go:2139","msg":"failed to publish local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-373000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2024-08-18T19:10:42.840252Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:42.840306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:42.840320Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:42.840332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 2939] sent MsgPreVote request to 64535d9f3a4791ce at term 3"}
	
	
	==> etcd [a9e532272f1b] <==
	{"level":"warn","ts":"2024-08-18T19:08:52.581006Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T19:08:49.021104Z","time spent":"3.559898593s","remote":"127.0.0.1:56420","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":0,"response size":0,"request content":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true "}
	2024/08/18 19:08:52 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-18T19:08:52.581089Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"6.986891044s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-18T19:08:52.581101Z","caller":"traceutil/trace.go:171","msg":"trace[1676890744] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; }","duration":"6.986905757s","start":"2024-08-18T19:08:45.594192Z","end":"2024-08-18T19:08:52.581098Z","steps":["trace[1676890744] 'agreement among raft nodes before linearized reading'  (duration: 6.986891942s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T19:08:52.581111Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T19:08:45.594168Z","time spent":"6.986940437s","remote":"127.0.0.1:56392","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	2024/08/18 19:08:52 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-18T19:08:52.581159Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"5.633749967s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-18T19:08:52.581170Z","caller":"traceutil/trace.go:171","msg":"trace[1130682409] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; }","duration":"5.633762365s","start":"2024-08-18T19:08:46.947405Z","end":"2024-08-18T19:08:52.581167Z","steps":["trace[1130682409] 'agreement among raft nodes before linearized reading'  (duration: 5.633750027s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T19:08:52.581180Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T19:08:46.947373Z","time spent":"5.633803888s","remote":"127.0.0.1:56504","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":0,"response size":0,"request content":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true "}
	2024/08/18 19:08:52 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-18T19:08:52.581225Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T19:08:52.217567Z","time spent":"363.656855ms","remote":"127.0.0.1:56498","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/08/18 19:08:52 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-18T19:08:52.608176Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-18T19:08:52.608248Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-18T19:08:52.608286Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-18T19:08:52.608395Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:08:52.608428Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:08:52.608446Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:08:52.608520Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:08:52.608546Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:08:52.608595Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:08:52.608606Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:08:52.610214Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-18T19:08:52.610316Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-18T19:08:52.610348Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-373000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 19:10:43 up 1 min,  0 users,  load average: 0.16, 0.10, 0.04
	Linux ha-373000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [fbeef7aab770] <==
	I0818 19:08:13.321287       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	I0818 19:08:13.321527       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0818 19:08:13.321536       1 main.go:299] handling current node
	I0818 19:08:13.321545       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0818 19:08:13.321548       1 main.go:322] Node ha-373000-m02 has CIDR [10.244.1.0/24] 
	I0818 19:08:23.318236       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0818 19:08:23.318272       1 main.go:322] Node ha-373000-m02 has CIDR [10.244.1.0/24] 
	I0818 19:08:23.318358       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0818 19:08:23.318384       1 main.go:322] Node ha-373000-m03 has CIDR [10.244.2.0/24] 
	I0818 19:08:23.318431       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0818 19:08:23.318455       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	I0818 19:08:23.318492       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0818 19:08:23.318516       1 main.go:299] handling current node
	I0818 19:08:33.318121       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0818 19:08:33.318160       1 main.go:299] handling current node
	I0818 19:08:33.318171       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0818 19:08:33.318175       1 main.go:322] Node ha-373000-m02 has CIDR [10.244.1.0/24] 
	I0818 19:08:33.318256       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0818 19:08:33.318261       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	I0818 19:08:43.314133       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0818 19:08:43.314185       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	I0818 19:08:43.314278       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0818 19:08:43.314360       1 main.go:299] handling current node
	I0818 19:08:43.314444       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0818 19:08:43.314482       1 main.go:322] Node ha-373000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [7e27aa53db96] <==
	I0818 19:10:06.959907       1 options.go:228] external host was not specified, using 192.169.0.5
	I0818 19:10:06.961347       1 server.go:142] Version: v1.31.0
	I0818 19:10:06.961387       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:10:07.546161       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0818 19:10:07.549946       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0818 19:10:07.552371       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0818 19:10:07.552381       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0818 19:10:07.552555       1 instance.go:232] Using reconciler: lease
	W0818 19:10:27.545475       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0818 19:10:27.545529       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0818 19:10:27.554420       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	W0818 19:10:27.554432       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
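	
	kube-apiserver's fatal exit above follows from the leaderless etcd in the previous section: the client connection to 127.0.0.1:2379 never becomes ready, so the storage factory gives up at its startup deadline. A sketch of that shape only; the 20-second budget and the channel names are assumptions, not the apiserver's actual code:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	)
	
	func main() {
		// Shape of the fatal path: wait for storage readiness under a
		// context deadline. With etcd leaderless, readiness never arrives
		// and the deadline fires first.
		ctx, cancel := context.WithTimeout(context.Background(), 20*time.Second) // assumed budget
		defer cancel()
	
		ready := make(chan struct{}) // never closed in this sketch
		select {
		case <-ready:
			fmt.Println("storage ready")
		case <-ctx.Done():
			fmt.Println("Error creating leases:", ctx.Err()) // context deadline exceeded
		}
	}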
	
	
	==> kube-controller-manager [24788de6a779] <==
	I0818 19:10:05.103965       1 serving.go:386] Generated self-signed cert in-memory
	I0818 19:10:05.483625       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0818 19:10:05.483663       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:10:05.484840       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0818 19:10:05.484954       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0818 19:10:05.484863       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0818 19:10:05.485038       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0818 19:10:27.488487       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": net/http: TLS handshake timeout"
	
	
	==> kube-proxy [530d58000189] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 19:05:43.260298       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 19:05:43.283054       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0818 19:05:43.283201       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 19:05:43.332462       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 19:05:43.332509       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 19:05:43.332527       1 server_linux.go:169] "Using iptables Proxier"
	I0818 19:05:43.335382       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 19:05:43.336178       1 server.go:483] "Version info" version="v1.31.0"
	I0818 19:05:43.336209       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:05:43.339664       1 config.go:197] "Starting service config controller"
	I0818 19:05:43.340475       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 19:05:43.340854       1 config.go:104] "Starting endpoint slice config controller"
	I0818 19:05:43.340884       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 19:05:43.342595       1 config.go:326] "Starting node config controller"
	I0818 19:05:43.342621       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 19:05:43.440978       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0818 19:05:43.441099       1 shared_informer.go:320] Caches are synced for service config
	I0818 19:05:43.442676       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5bb7217cec87] <==
	E0818 19:10:28.561278       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48758->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.561149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48800->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.561322       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48800->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.561446       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48768->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.561856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48768->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.562037       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48784->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.562108       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48784->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.562348       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:55098->192.169.0.5:8443: read: connection reset by peer
	W0818 19:10:28.562424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48820->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.562538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48820->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	E0818 19:10:28.562490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:55098->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.562730       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48804->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.563108       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48804->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.562973       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48786->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.563171       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48786->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.563330       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48824->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.563535       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48824->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.563376       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48782->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.563736       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48782->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.563801       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48790->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.563955       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48790->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.680351       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0818 19:10:28.680700       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:10:29.353980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0818 19:10:29.354184       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-scheduler [de016fdbd6fe] <==
	I0818 19:04:58.645297       1 serving.go:386] Generated self-signed cert in-memory
	W0818 19:05:08.939365       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0818 19:05:08.939390       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0818 19:05:08.939395       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0818 19:05:17.672661       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0818 19:05:17.674961       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:05:17.680297       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0818 19:05:17.680709       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0818 19:05:17.683175       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0818 19:05:17.689784       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0818 19:05:17.786103       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0818 19:08:52.663744       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0818 19:08:52.664520       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0818 19:08:52.664805       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0818 19:08:52.665618       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 18 19:10:26 ha-373000 kubelet[1587]: E0818 19:10:26.363782    1587 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-373000"
	Aug 18 19:10:26 ha-373000 kubelet[1587]: E0818 19:10:26.364415    1587 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-373000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Aug 18 19:10:27 ha-373000 kubelet[1587]: I0818 19:10:27.784218    1587 scope.go:117] "RemoveContainer" containerID="6f6bc1d9592b465e3b7c4d9db0e74b67040cbbed37626775b4aad68af80ddeb2"
	Aug 18 19:10:27 ha-373000 kubelet[1587]: I0818 19:10:27.785184    1587 scope.go:117] "RemoveContainer" containerID="7e27aa53db964848f64d9282a63325b77e8066d4f7afd7e078ca49d74f625756"
	Aug 18 19:10:27 ha-373000 kubelet[1587]: E0818 19:10:27.785346    1587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-373000_kube-system(be3d0c0dc33917295e5fed284f29b0d0)\"" pod="kube-system/kube-apiserver-ha-373000" podUID="be3d0c0dc33917295e5fed284f29b0d0"
	Aug 18 19:10:27 ha-373000 kubelet[1587]: I0818 19:10:27.792994    1587 scope.go:117] "RemoveContainer" containerID="24788de6a779b6b3b57b2e2becb6248a4b461c40237836c9a085858f19bf537e"
	Aug 18 19:10:27 ha-373000 kubelet[1587]: E0818 19:10:27.793092    1587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-373000_kube-system(ed589efba4f3e45b48b475449dc23ab8)\"" pod="kube-system/kube-controller-manager-ha-373000" podUID="ed589efba4f3e45b48b475449dc23ab8"
	Aug 18 19:10:27 ha-373000 kubelet[1587]: I0818 19:10:27.797222    1587 scope.go:117] "RemoveContainer" containerID="0a3644cb0db7c69e66c1b1d0f3234fba20e233a15e27e5772148d80d14f33f4e"
	Aug 18 19:10:28 ha-373000 kubelet[1587]: E0818 19:10:28.895968    1587 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-373000\" not found"
	Aug 18 19:10:29 ha-373000 kubelet[1587]: I0818 19:10:29.422762    1587 scope.go:117] "RemoveContainer" containerID="24788de6a779b6b3b57b2e2becb6248a4b461c40237836c9a085858f19bf537e"
	Aug 18 19:10:29 ha-373000 kubelet[1587]: E0818 19:10:29.423034    1587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-373000_kube-system(ed589efba4f3e45b48b475449dc23ab8)\"" pod="kube-system/kube-controller-manager-ha-373000" podUID="ed589efba4f3e45b48b475449dc23ab8"
	Aug 18 19:10:32 ha-373000 kubelet[1587]: E0818 19:10:32.507755    1587 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-373000.17ece84946cc9aa1  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-373000,UID:ha-373000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-373000,},FirstTimestamp:2024-08-18 19:09:18.794128033 +0000 UTC m=+0.098935686,LastTimestamp:2024-08-18 19:09:18.794128033 +0000 UTC m=+0.098935686,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-373000,}"
	Aug 18 19:10:33 ha-373000 kubelet[1587]: I0818 19:10:33.051084    1587 scope.go:117] "RemoveContainer" containerID="24788de6a779b6b3b57b2e2becb6248a4b461c40237836c9a085858f19bf537e"
	Aug 18 19:10:33 ha-373000 kubelet[1587]: E0818 19:10:33.051343    1587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-373000_kube-system(ed589efba4f3e45b48b475449dc23ab8)\"" pod="kube-system/kube-controller-manager-ha-373000" podUID="ed589efba4f3e45b48b475449dc23ab8"
	Aug 18 19:10:33 ha-373000 kubelet[1587]: I0818 19:10:33.366165    1587 kubelet_node_status.go:72] "Attempting to register node" node="ha-373000"
	Aug 18 19:10:34 ha-373000 kubelet[1587]: I0818 19:10:34.986485    1587 scope.go:117] "RemoveContainer" containerID="7e27aa53db964848f64d9282a63325b77e8066d4f7afd7e078ca49d74f625756"
	Aug 18 19:10:34 ha-373000 kubelet[1587]: E0818 19:10:34.987184    1587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-373000_kube-system(be3d0c0dc33917295e5fed284f29b0d0)\"" pod="kube-system/kube-apiserver-ha-373000" podUID="be3d0c0dc33917295e5fed284f29b0d0"
	Aug 18 19:10:35 ha-373000 kubelet[1587]: E0818 19:10:35.579478    1587 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-373000"
	Aug 18 19:10:35 ha-373000 kubelet[1587]: E0818 19:10:35.579545    1587 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-373000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Aug 18 19:10:35 ha-373000 kubelet[1587]: I0818 19:10:35.861801    1587 scope.go:117] "RemoveContainer" containerID="7e27aa53db964848f64d9282a63325b77e8066d4f7afd7e078ca49d74f625756"
	Aug 18 19:10:35 ha-373000 kubelet[1587]: E0818 19:10:35.861956    1587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-373000_kube-system(be3d0c0dc33917295e5fed284f29b0d0)\"" pod="kube-system/kube-apiserver-ha-373000" podUID="be3d0c0dc33917295e5fed284f29b0d0"
	Aug 18 19:10:38 ha-373000 kubelet[1587]: W0818 19:10:38.650520    1587 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 18 19:10:38 ha-373000 kubelet[1587]: E0818 19:10:38.650612    1587 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 18 19:10:38 ha-373000 kubelet[1587]: E0818 19:10:38.896386    1587 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-373000\" not found"
	Aug 18 19:10:42 ha-373000 kubelet[1587]: I0818 19:10:42.580270    1587 kubelet_node_status.go:72] "Attempting to register node" node="ha-373000"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-373000 -n ha-373000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-373000 -n ha-373000: exit status 2 (149.997629ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-373000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (2.78s)
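The post-mortem above hinges on one detail worth noting: minikube's status command reports component state through its exit code, which is why the harness logs "exit status 2 (may be ok)" and treats the captured stdout ("Stopped") as the real answer. Below is a minimal Go sketch of that pattern; it is an illustration, not the actual helpers_test.go code, though the binary path and flags are copied from the log above.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same probe the harness runs after a failure. A non-zero exit does
		// not mean the command broke: the exit code encodes cluster state,
		// and the captured stdout ("Stopped") is still the meaningful result.
		cmd := exec.Command("out/minikube-darwin-amd64", "status",
			"--format={{.APIServer}}", "-p", "ha-373000", "-n", "ha-373000")
		out, err := cmd.Output()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Printf("status exited %d (may be ok): %q\n",
				ee.ExitCode(), strings.TrimSpace(string(out)))
			return
		}
		if err != nil {
			// Hypothetical handling: the command itself failed to run.
			fmt.Println("could not run status:", err)
			return
		}
		fmt.Println("apiserver:", strings.TrimSpace(string(out)))
	}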

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (2.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:304: expected profile "ha-373000" in json of 'profile list' to include 4 nodes but have 3 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-373000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-373000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServe
rPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-373000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\
":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\
":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMet
rics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
ha_test.go:307: expected profile "ha-373000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-373000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-373000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-373000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"Kuber
netesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\"
:false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false
,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
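Both assertions above parse the same "profile list --output json" blob and inspect just two fields buried in it: the length of Config.Nodes (4 expected after "node add", 3 found — m03 was deleted earlier per the Audit table below, and the newly added node never made it into the config) and the profile's top-level Status ("HAppy" expected, "Stopped" found). A minimal Go sketch of that check follows, with structs trimmed to the fields the captured JSON actually shows; it is an illustration, not ha_test.go's real helper.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// profileList mirrors only the parts of the JSON the assertions use.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
			Config struct {
				Nodes []struct {
					Name         string `json:"Name"`
					ControlPlane bool   `json:"ControlPlane"`
				} `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-amd64",
			"profile", "list", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			// Against the state captured above this prints nodes=3,
			// status=Stopped: exactly what ha_test.go:304 and :307 reject.
			fmt.Printf("profile %s: nodes=%d status=%s\n",
				p.Name, len(p.Config.Nodes), p.Status)
		}
	}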
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-373000 -n ha-373000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-373000 -n ha-373000: exit status 2 (153.402369ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-373000 logs -n 25: (2.100587947s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n ha-373000-m04 sudo cat                                                                                      | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /home/docker/cp-test_ha-373000-m03_ha-373000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-373000 cp testdata/cp-test.txt                                                                                            | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-373000 cp ha-373000-m04:/home/docker/cp-test.txt                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1633039751/001/cp-test_ha-373000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-373000 cp ha-373000-m04:/home/docker/cp-test.txt                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000:/home/docker/cp-test_ha-373000-m04_ha-373000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n ha-373000 sudo cat                                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /home/docker/cp-test_ha-373000-m04_ha-373000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-373000 cp ha-373000-m04:/home/docker/cp-test.txt                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m02:/home/docker/cp-test_ha-373000-m04_ha-373000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n ha-373000-m02 sudo cat                                                                                      | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /home/docker/cp-test_ha-373000-m04_ha-373000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-373000 cp ha-373000-m04:/home/docker/cp-test.txt                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m03:/home/docker/cp-test_ha-373000-m04_ha-373000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | ha-373000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-373000 ssh -n ha-373000-m03 sudo cat                                                                                      | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | /home/docker/cp-test_ha-373000-m04_ha-373000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-373000 node stop m02 -v=7                                                                                                 | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:03 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-373000 node start m02 -v=7                                                                                                | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:03 PDT | 18 Aug 24 12:04 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-373000 -v=7                                                                                                       | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:04 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-373000 -v=7                                                                                                            | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:04 PDT | 18 Aug 24 12:04 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-373000 --wait=true -v=7                                                                                                | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:04 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-373000                                                                                                            | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:08 PDT |                     |
	| node    | ha-373000 node delete m03 -v=7                                                                                               | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:08 PDT | 18 Aug 24 12:08 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-373000 stop -v=7                                                                                                          | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:08 PDT | 18 Aug 24 12:09 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-373000 --wait=true                                                                                                     | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:09 PDT |                     |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	| node    | add -p ha-373000                                                                                                             | ha-373000 | jenkins | v1.33.1 | 18 Aug 24 12:10 PDT |                     |
	|         | --control-plane -v=7                                                                                                         |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 12:09:00
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 12:09:00.388954    3976 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:09:00.389224    3976 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:09:00.389230    3976 out.go:358] Setting ErrFile to fd 2...
	I0818 12:09:00.389234    3976 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:09:00.389403    3976 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1007/.minikube/bin
	I0818 12:09:00.390788    3976 out.go:352] Setting JSON to false
	I0818 12:09:00.412980    3976 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2311,"bootTime":1724005829,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0818 12:09:00.413073    3976 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 12:09:00.435491    3976 out.go:177] * [ha-373000] minikube v1.33.1 on Darwin 14.6.1
	I0818 12:09:00.478012    3976 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 12:09:00.478041    3976 notify.go:220] Checking for updates...
	I0818 12:09:00.520842    3976 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:09:00.541902    3976 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0818 12:09:00.562974    3976 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 12:09:00.583978    3976 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	I0818 12:09:00.604937    3976 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 12:09:00.626633    3976 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:09:00.627309    3976 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:09:00.627392    3976 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:09:00.636929    3976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52014
	I0818 12:09:00.637287    3976 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:09:00.637735    3976 main.go:141] libmachine: Using API Version  1
	I0818 12:09:00.637744    3976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:09:00.637948    3976 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:09:00.638063    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:00.638277    3976 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 12:09:00.638525    3976 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:09:00.638545    3976 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:09:00.646880    3976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52016
	I0818 12:09:00.647224    3976 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:09:00.647595    3976 main.go:141] libmachine: Using API Version  1
	I0818 12:09:00.647613    3976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:09:00.647826    3976 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:09:00.647950    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:00.676977    3976 out.go:177] * Using the hyperkit driver based on existing profile
	I0818 12:09:00.718931    3976 start.go:297] selected driver: hyperkit
	I0818 12:09:00.718961    3976 start.go:901] validating driver "hyperkit" against &{Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:09:00.719183    3976 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 12:09:00.719386    3976 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:09:00.719595    3976 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19423-1007/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0818 12:09:00.729307    3976 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0818 12:09:00.733175    3976 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:09:00.733199    3976 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0818 12:09:00.735834    3976 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 12:09:00.735880    3976 cni.go:84] Creating CNI manager for ""
	I0818 12:09:00.735888    3976 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0818 12:09:00.735960    3976 start.go:340] cluster config:
	{Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.16
9.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false
kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 12:09:00.736064    3976 iso.go:125] acquiring lock: {Name:mkfcc70059a4397f76252ba67d02448d3569c468 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 12:09:00.757023    3976 out.go:177] * Starting "ha-373000" primary control-plane node in "ha-373000" cluster
	I0818 12:09:00.777783    3976 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:09:00.777901    3976 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0818 12:09:00.777924    3976 cache.go:56] Caching tarball of preloaded images
	I0818 12:09:00.778128    3976 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0818 12:09:00.778148    3976 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:09:00.778333    3976 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:09:00.779289    3976 start.go:360] acquireMachinesLock for ha-373000: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:09:00.779483    3976 start.go:364] duration metric: took 143.76µs to acquireMachinesLock for "ha-373000"
	I0818 12:09:00.779521    3976 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:09:00.779537    3976 fix.go:54] fixHost starting: 
	I0818 12:09:00.779956    3976 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:09:00.779984    3976 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:09:00.789309    3976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52018
	I0818 12:09:00.789666    3976 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:09:00.790031    3976 main.go:141] libmachine: Using API Version  1
	I0818 12:09:00.790040    3976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:09:00.790251    3976 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:09:00.790366    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:00.790468    3976 main.go:141] libmachine: (ha-373000) Calling .GetState
	I0818 12:09:00.790556    3976 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:00.790639    3976 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid from json: 3836
	I0818 12:09:00.791548    3976 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid 3836 missing from process table
	I0818 12:09:00.791593    3976 fix.go:112] recreateIfNeeded on ha-373000: state=Stopped err=<nil>
	I0818 12:09:00.791619    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	W0818 12:09:00.791703    3976 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:09:00.833742    3976 out.go:177] * Restarting existing hyperkit VM for "ha-373000" ...
	I0818 12:09:00.854617    3976 main.go:141] libmachine: (ha-373000) Calling .Start
	I0818 12:09:00.854890    3976 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:00.854917    3976 main.go:141] libmachine: (ha-373000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid
	I0818 12:09:00.856657    3976 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid 3836 missing from process table
	I0818 12:09:00.856693    3976 main.go:141] libmachine: (ha-373000) DBG | pid 3836 is in state "Stopped"
	I0818 12:09:00.856718    3976 main.go:141] libmachine: (ha-373000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid...
	I0818 12:09:00.856984    3976 main.go:141] libmachine: (ha-373000) DBG | Using UUID 2f6e5f86-d003-4f9b-8f55-d5f48a14c3df
	I0818 12:09:00.989123    3976 main.go:141] libmachine: (ha-373000) DBG | Generated MAC be:21:66:25:9a:b1
	I0818 12:09:00.989174    3976 main.go:141] libmachine: (ha-373000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000
	I0818 12:09:00.989237    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2f6e5f86-d003-4f9b-8f55-d5f48a14c3df", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c2540)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:09:00.989280    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2f6e5f86-d003-4f9b-8f55-d5f48a14c3df", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c2540)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:09:00.989323    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2f6e5f86-d003-4f9b-8f55-d5f48a14c3df", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/ha-373000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd,earlyprintk=s
erial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"}
	I0818 12:09:00.989366    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2f6e5f86-d003-4f9b-8f55-d5f48a14c3df -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/ha-373000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset
norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"
	I0818 12:09:00.989381    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 12:09:00.990799    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 DEBUG: hyperkit: Pid is 3990
	I0818 12:09:00.991176    3976 main.go:141] libmachine: (ha-373000) DBG | Attempt 0
	I0818 12:09:00.991196    3976 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:00.991218    3976 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid from json: 3990
	I0818 12:09:00.993000    3976 main.go:141] libmachine: (ha-373000) DBG | Searching for be:21:66:25:9a:b1 in /var/db/dhcpd_leases ...
	I0818 12:09:00.993068    3976 main.go:141] libmachine: (ha-373000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0818 12:09:00.993082    3976 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:09:00.993090    3976 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:09:00.993097    3976 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c3975b}
	I0818 12:09:00.993119    3976 main.go:141] libmachine: (ha-373000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39749}
	I0818 12:09:00.993129    3976 main.go:141] libmachine: (ha-373000) DBG | Found match: be:21:66:25:9a:b1
	I0818 12:09:00.993139    3976 main.go:141] libmachine: (ha-373000) DBG | IP: 192.169.0.5
	I0818 12:09:00.993184    3976 main.go:141] libmachine: (ha-373000) Calling .GetConfigRaw
	I0818 12:09:00.994094    3976 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:09:00.994333    3976 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:09:00.994945    3976 machine.go:93] provisionDockerMachine start ...
	I0818 12:09:00.994967    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:00.995142    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:00.995271    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:00.995391    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:00.995521    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:00.995632    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:00.995768    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:00.996051    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:00.996062    3976 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 12:09:00.999904    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:00 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 12:09:01.080830    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 12:09:01.081571    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:09:01.081587    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:09:01.081595    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:09:01.081604    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:09:01.460230    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 12:09:01.460268    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 12:09:01.574713    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:09:01.574755    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:09:01.574768    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:09:01.574787    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:09:01.575699    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 12:09:01.575710    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 12:09:07.163001    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:07 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0818 12:09:07.163029    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:07 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0818 12:09:07.163053    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:07 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0818 12:09:07.186829    3976 main.go:141] libmachine: (ha-373000) DBG | 2024/08/18 12:09:07 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0818 12:09:12.062770    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 12:09:12.062784    3976 main.go:141] libmachine: (ha-373000) Calling .GetMachineName
	I0818 12:09:12.062975    3976 buildroot.go:166] provisioning hostname "ha-373000"
	I0818 12:09:12.062986    3976 main.go:141] libmachine: (ha-373000) Calling .GetMachineName
	I0818 12:09:12.063087    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.063175    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:12.063280    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.063371    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.063480    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:12.063605    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:12.063750    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:12.063759    3976 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-373000 && echo "ha-373000" | sudo tee /etc/hostname
	I0818 12:09:12.131801    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-373000
	
	I0818 12:09:12.131819    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.131954    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:12.132061    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.132144    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.132224    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:12.132376    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:12.132528    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:12.132546    3976 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-373000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-373000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-373000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 12:09:12.199331    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
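The shell fragment just executed makes the /etc/hosts update idempotent: if no line already ends with the hostname, it either rewrites an existing 127.0.1.1 entry in place or appends a fresh one. A minimal Go sketch of the same logic (the helper name ensureHostsEntry is illustrative, not minikube's actual code):

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// ensureHostsEntry mirrors the shell above: leave the file alone if the
	// hostname is already mapped, otherwise rewrite the 127.0.1.1 line or
	// append a new one.
	func ensureHostsEntry(hosts, name string) string {
		if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
			return hosts // hostname already present; second run is a no-op
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		return strings.TrimRight(hosts, "\n") + fmt.Sprintf("\n127.0.1.1 %s\n", name)
	}

	func main() {
		fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "ha-373000"))
	}

Run against a stock hosts file this appends the 127.0.1.1 line; running it again changes nothing, which is why the provisioner can apply it on every start.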
	I0818 12:09:12.199349    3976 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1007/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1007/.minikube}
	I0818 12:09:12.199369    3976 buildroot.go:174] setting up certificates
	I0818 12:09:12.199383    3976 provision.go:84] configureAuth start
	I0818 12:09:12.199391    3976 main.go:141] libmachine: (ha-373000) Calling .GetMachineName
	I0818 12:09:12.199540    3976 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:09:12.199634    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.199719    3976 provision.go:143] copyHostCerts
	I0818 12:09:12.199749    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:09:12.199819    3976 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem, removing ...
	I0818 12:09:12.199828    3976 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:09:12.199960    3976 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem (1082 bytes)
	I0818 12:09:12.200176    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:09:12.200222    3976 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem, removing ...
	I0818 12:09:12.200227    3976 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:09:12.200306    3976 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem (1123 bytes)
	I0818 12:09:12.200461    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:09:12.200505    3976 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem, removing ...
	I0818 12:09:12.200509    3976 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:09:12.200584    3976 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem (1675 bytes)
	I0818 12:09:12.200731    3976 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem org=jenkins.ha-373000 san=[127.0.0.1 192.169.0.5 ha-373000 localhost minikube]
	I0818 12:09:12.289022    3976 provision.go:177] copyRemoteCerts
	I0818 12:09:12.289076    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 12:09:12.289091    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.289227    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:12.289322    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.289416    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:12.289508    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:09:12.325856    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 12:09:12.325929    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 12:09:12.345953    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 12:09:12.346012    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0818 12:09:12.366027    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 12:09:12.366092    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 12:09:12.386212    3976 provision.go:87] duration metric: took 186.823558ms to configureAuth
	I0818 12:09:12.386225    3976 buildroot.go:189] setting minikube options for container-runtime
	I0818 12:09:12.386405    3976 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:09:12.386418    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:12.386551    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.386643    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:12.386731    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.386817    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.386909    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:12.387025    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:12.387159    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:12.387167    3976 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0818 12:09:12.445833    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0818 12:09:12.445851    3976 buildroot.go:70] root file system type: tmpfs
	I0818 12:09:12.445930    3976 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0818 12:09:12.445943    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.446067    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:12.446173    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.446279    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.446389    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:12.446543    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:12.446679    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:12.446725    3976 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0818 12:09:12.516077    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0818 12:09:12.516100    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:12.516233    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:12.516348    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.516437    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:12.516526    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:12.516667    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:12.516813    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:12.516825    3976 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0818 12:09:14.219167    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0818 12:09:14.219181    3976 machine.go:96] duration metric: took 13.22463913s to provisionDockerMachine
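The command above follows a replace-if-changed pattern: write the candidate unit to docker.service.new, and only when diff reports a difference move it into place and bounce the daemon (daemon-reload, enable, restart). Here diff failed because no docker.service existed yet, so the new unit was installed and the service enabled. A rough Go sketch of the pattern, assuming root and a systemd host (replaceIfChanged is an illustrative name):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func replaceIfChanged(current, next string) error {
		old, _ := os.ReadFile(current) // a missing unit reads as empty, forcing the swap
		neu, err := os.ReadFile(next)
		if err != nil {
			return err
		}
		if bytes.Equal(old, neu) {
			return os.Remove(next) // contents match; keep the existing unit untouched
		}
		if err := os.Rename(next, current); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", "docker"},
			{"systemctl", "restart", "docker"},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		fmt.Println(replaceIfChanged(
			"/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new"))
	}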
	I0818 12:09:14.219193    3976 start.go:293] postStartSetup for "ha-373000" (driver="hyperkit")
	I0818 12:09:14.219201    3976 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 12:09:14.219211    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:14.219390    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 12:09:14.219417    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:14.219519    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:14.219630    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:14.219724    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:14.219808    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:09:14.259561    3976 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 12:09:14.263959    3976 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 12:09:14.263976    3976 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/addons for local assets ...
	I0818 12:09:14.264080    3976 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/files for local assets ...
	I0818 12:09:14.264273    3976 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> 15262.pem in /etc/ssl/certs
	I0818 12:09:14.264280    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /etc/ssl/certs/15262.pem
	I0818 12:09:14.264487    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 12:09:14.272283    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:09:14.302942    3976 start.go:296] duration metric: took 83.742133ms for postStartSetup
	I0818 12:09:14.302965    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:14.303146    3976 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0818 12:09:14.303160    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:14.303248    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:14.303361    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:14.303436    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:14.303526    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:09:14.338080    3976 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0818 12:09:14.338142    3976 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0818 12:09:14.391638    3976 fix.go:56] duration metric: took 13.612527396s for fixHost
	I0818 12:09:14.391662    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:14.391810    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:14.391899    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:14.391991    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:14.392074    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:14.392222    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:14.392364    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0818 12:09:14.392372    3976 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 12:09:14.449746    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724008154.620949792
	
	I0818 12:09:14.449760    3976 fix.go:216] guest clock: 1724008154.620949792
	I0818 12:09:14.449772    3976 fix.go:229] Guest: 2024-08-18 12:09:14.620949792 -0700 PDT Remote: 2024-08-18 12:09:14.391652 -0700 PDT m=+14.038170292 (delta=229.297792ms)
	I0818 12:09:14.449789    3976 fix.go:200] guest clock delta is within tolerance: 229.297792ms
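fix.go is comparing the guest's clock (read via `date +%s.%N` over SSH) against the host's, and accepts the ~229ms skew. A small Go sketch of the delta computation using the values from the log; the 2-second tolerance below is an assumption, since the log only states the delta is within tolerance:

	package main

	import (
		"fmt"
		"time"
	)

	// guestDelta converts the guest's `date +%s.%N` output into a time and
	// compares it with the host-side timestamp taken around the SSH call.
	func guestDelta(guestSecs float64, host time.Time) time.Duration {
		guest := time.Unix(0, int64(guestSecs*float64(time.Second)))
		return guest.Sub(host)
	}

	func main() {
		// Values from the log; float64 rounds the nanoseconds, which is fine
		// for a tolerance check at millisecond scale.
		d := guestDelta(1724008154.620949792,
			time.Date(2024, 8, 18, 19, 9, 14, 391652000, time.UTC))
		fmt.Printf("delta=%v ok=%v\n", d, d > -2*time.Second && d < 2*time.Second)
	}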
	I0818 12:09:14.449793    3976 start.go:83] releasing machines lock for "ha-373000", held for 13.670724274s
	I0818 12:09:14.449812    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:14.449942    3976 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:09:14.450037    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:14.450349    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:14.450474    3976 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:09:14.450548    3976 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 12:09:14.450580    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:14.450637    3976 ssh_runner.go:195] Run: cat /version.json
	I0818 12:09:14.450648    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:09:14.450688    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:14.450746    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:09:14.450782    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:14.450836    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:09:14.450854    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:14.450935    3976 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:09:14.450952    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:09:14.451045    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:09:14.541757    3976 ssh_runner.go:195] Run: systemctl --version
	I0818 12:09:14.546793    3976 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 12:09:14.550801    3976 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 12:09:14.550839    3976 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 12:09:14.564129    3976 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 12:09:14.564141    3976 start.go:495] detecting cgroup driver to use...
	I0818 12:09:14.564243    3976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:09:14.581664    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0818 12:09:14.590425    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0818 12:09:14.599077    3976 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0818 12:09:14.599120    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0818 12:09:14.607868    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:09:14.616526    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0818 12:09:14.625074    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:09:14.633725    3976 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 12:09:14.642461    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0818 12:09:14.651030    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0818 12:09:14.659717    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0818 12:09:14.668509    3976 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 12:09:14.676419    3976 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 12:09:14.684357    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:14.777696    3976 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0818 12:09:14.795379    3976 start.go:495] detecting cgroup driver to use...
	I0818 12:09:14.795465    3976 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0818 12:09:14.808091    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:09:14.819351    3976 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 12:09:14.834858    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:09:14.845068    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:09:14.855088    3976 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0818 12:09:14.879151    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:09:14.889782    3976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:09:14.904555    3976 ssh_runner.go:195] Run: which cri-dockerd
	I0818 12:09:14.907616    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0818 12:09:14.914893    3976 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0818 12:09:14.928498    3976 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0818 12:09:15.021302    3976 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0818 12:09:15.126534    3976 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0818 12:09:15.126611    3976 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0818 12:09:15.141437    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:15.238491    3976 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 12:09:17.633635    3976 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.395193434s)
	I0818 12:09:17.633701    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0818 12:09:17.644119    3976 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0818 12:09:17.657413    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:09:17.668074    3976 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0818 12:09:17.762478    3976 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0818 12:09:17.858367    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:17.948600    3976 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0818 12:09:17.962148    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 12:09:17.972120    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:18.070649    3976 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0818 12:09:18.132791    3976 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0818 12:09:18.132869    3976 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0818 12:09:18.137140    3976 start.go:563] Will wait 60s for crictl version
	I0818 12:09:18.137200    3976 ssh_runner.go:195] Run: which crictl
	I0818 12:09:18.140608    3976 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 12:09:18.167352    3976 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0818 12:09:18.167422    3976 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:09:18.186476    3976 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 12:09:18.224169    3976 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0818 12:09:18.224214    3976 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:09:18.224595    3976 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0818 12:09:18.229086    3976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 12:09:18.238631    3976 kubeadm.go:883] updating cluster {Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 12:09:18.238717    3976 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:09:18.238780    3976 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0818 12:09:18.252546    3976 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0818 12:09:18.252557    3976 docker.go:615] Images already preloaded, skipping extraction
	I0818 12:09:18.252627    3976 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0818 12:09:18.266684    3976 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0818 12:09:18.266703    3976 cache_images.go:84] Images are preloaded, skipping loading
	I0818 12:09:18.266713    3976 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.31.0 docker true true} ...
	I0818 12:09:18.266790    3976 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-373000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
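The drop-in above uses the empty-ExecStart trick (clear the inherited command, then set the real one) and is rendered from node parameters such as the hostname override and node IP. A sketch of rendering such a unit with text/template; the template text here is illustrative, not minikube's exact one:

	package main

	import (
		"os"
		"text/template"
	)

	// A trimmed kubelet drop-in rendered from node parameters.
	var dropIn = template.Must(template.New("kubelet").Parse(`[Unit]
	Wants=docker.socket

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Name}} --node-ip={{.IP}}

	[Install]
	`))

	func main() {
		dropIn.Execute(os.Stdout,
			struct{ Version, Name, IP string }{"v1.31.0", "ha-373000", "192.169.0.5"})
	}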
	I0818 12:09:18.266861    3976 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0818 12:09:18.304192    3976 cni.go:84] Creating CNI manager for ""
	I0818 12:09:18.304204    3976 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0818 12:09:18.304213    3976 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 12:09:18.304229    3976 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-373000 NodeName:ha-373000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
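The options pin the pod network to 10.244.0.0/16 and services to 10.96.0.0/12; kubeadm requires these ranges to be disjoint. A quick stdlib check (a sketch) confirming they do not overlap:

	package main

	import (
		"fmt"
		"net/netip"
	)

	func main() {
		pods := netip.MustParsePrefix("10.244.0.0/16") // podSubnet from the config
		svcs := netip.MustParsePrefix("10.96.0.0/12")  // serviceSubnet from the config
		fmt.Println("pod/service CIDR overlap:", pods.Overlaps(svcs)) // prints false
	}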
	I0818 12:09:18.304320    3976 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-373000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 12:09:18.304334    3976 kube-vip.go:115] generating kube-vip config ...
	I0818 12:09:18.304382    3976 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0818 12:09:18.316732    3976 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0818 12:09:18.316793    3976 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
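The env block configures Kubernetes Lease-based leader election for the VIP holder: a 5s lease, 3s renew deadline, and 1s retry period. As a rough reading (an assumption, not a documented guarantee), a standby should claim the 192.169.0.254 address within about leaseDuration plus retryPeriod after the holder dies:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		lease := 5 * time.Second // vip_leaseduration
		retry := 1 * time.Second // vip_retryperiod
		// Worst case: the lease must expire, then a standby's next retry wins it.
		fmt.Println("worst-case VIP failover ~", lease+retry)
	}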
	I0818 12:09:18.316840    3976 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 12:09:18.324597    3976 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 12:09:18.324641    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0818 12:09:18.331779    3976 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0818 12:09:18.345158    3976 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 12:09:18.358298    3976 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0818 12:09:18.372286    3976 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0818 12:09:18.385485    3976 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0818 12:09:18.388341    3976 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 12:09:18.397526    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:18.496612    3976 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 12:09:18.511160    3976 certs.go:68] Setting up /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000 for IP: 192.169.0.5
	I0818 12:09:18.511172    3976 certs.go:194] generating shared ca certs ...
	I0818 12:09:18.511184    3976 certs.go:226] acquiring lock for ca certs: {Name:mkf9c4ce6e92bb713042213e4605fc93500c1ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:09:18.511356    3976 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key
	I0818 12:09:18.511436    3976 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key
	I0818 12:09:18.511446    3976 certs.go:256] generating profile certs ...
	I0818 12:09:18.511538    3976 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.key
	I0818 12:09:18.511564    3976 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.55d44b69
	I0818 12:09:18.511579    3976 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.55d44b69 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0818 12:09:18.678090    3976 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.55d44b69 ...
	I0818 12:09:18.678108    3976 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.55d44b69: {Name:mk412ce60d50ec37c24febde03f7225e8a48a24c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:09:18.678466    3976 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.55d44b69 ...
	I0818 12:09:18.678480    3976 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.55d44b69: {Name:mke31239238122280f7cbf00316b2acd43533e9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:09:18.678743    3976 certs.go:381] copying /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt.55d44b69 -> /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt
	I0818 12:09:18.678987    3976 certs.go:385] copying /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key.55d44b69 -> /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key
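This step mints a serving certificate whose IP SANs cover the in-cluster service VIP (10.96.0.1), localhost, the node IPs, and the HA VIP, so clients can verify the API server at any of those addresses. A compact Go sketch of issuing such a cert (self-signed here for brevity, whereas minikube signs with its CA; the field values are illustrative):

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs drawn from the log line above (subset shown).
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("192.169.0.5"), net.ParseIP("192.169.0.254"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		fmt.Println("DER bytes:", len(der), "err:", err)
	}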
	I0818 12:09:18.679293    3976 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key
	I0818 12:09:18.679306    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0818 12:09:18.679332    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0818 12:09:18.679353    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0818 12:09:18.679374    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0818 12:09:18.679394    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0818 12:09:18.679414    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0818 12:09:18.679441    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0818 12:09:18.679462    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0818 12:09:18.679567    3976 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem (1338 bytes)
	W0818 12:09:18.679618    3976 certs.go:480] ignoring /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526_empty.pem, impossibly tiny 0 bytes
	I0818 12:09:18.679629    3976 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem (1675 bytes)
	I0818 12:09:18.679662    3976 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem (1082 bytes)
	I0818 12:09:18.679695    3976 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem (1123 bytes)
	I0818 12:09:18.679735    3976 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem (1675 bytes)
	I0818 12:09:18.679815    3976 certs.go:484] found cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:09:18.679851    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /usr/share/ca-certificates/15262.pem
	I0818 12:09:18.679895    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:09:18.679917    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem -> /usr/share/ca-certificates/1526.pem
	I0818 12:09:18.680416    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 12:09:18.731491    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0818 12:09:18.777149    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 12:09:18.836957    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0818 12:09:18.879727    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0818 12:09:18.904838    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 12:09:18.933787    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 12:09:18.969389    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 12:09:18.994753    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /usr/share/ca-certificates/15262.pem (1708 bytes)
	I0818 12:09:19.013849    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 12:09:19.033471    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/1526.pem --> /usr/share/ca-certificates/1526.pem (1338 bytes)
	I0818 12:09:19.052595    3976 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 12:09:19.066128    3976 ssh_runner.go:195] Run: openssl version
	I0818 12:09:19.070271    3976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15262.pem && ln -fs /usr/share/ca-certificates/15262.pem /etc/ssl/certs/15262.pem"
	I0818 12:09:19.079228    3976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15262.pem
	I0818 12:09:19.082728    3976 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:54 /usr/share/ca-certificates/15262.pem
	I0818 12:09:19.082763    3976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15262.pem
	I0818 12:09:19.086877    3976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15262.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 12:09:19.095804    3976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 12:09:19.104889    3976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:09:19.108208    3976 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:38 /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:09:19.108241    3976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 12:09:19.112406    3976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 12:09:19.121720    3976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1526.pem && ln -fs /usr/share/ca-certificates/1526.pem /etc/ssl/certs/1526.pem"
	I0818 12:09:19.130845    3976 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1526.pem
	I0818 12:09:19.134345    3976 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:54 /usr/share/ca-certificates/1526.pem
	I0818 12:09:19.134389    3976 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1526.pem
	I0818 12:09:19.138941    3976 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1526.pem /etc/ssl/certs/51391683.0"
	I0818 12:09:19.148376    3976 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 12:09:19.151715    3976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 12:09:19.155985    3976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 12:09:19.160273    3976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 12:09:19.165064    3976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 12:09:19.169962    3976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 12:09:19.174244    3976 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
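Each `openssl x509 -checkend 86400` call exits non-zero if the certificate will expire within 24 hours, which is what triggers cert regeneration. A Go sketch of the same check (expiresSoon returns true exactly where openssl would fail):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresSoon reports whether the PEM certificate at path expires within
	// the given window, mirroring `openssl x509 -checkend`.
	func expiresSoon(path string, within time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(within).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println("expires within 24h:", soon, "err:", err)
	}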
	I0818 12:09:19.178473    3976 kubeadm.go:392] StartCluster: {Name:ha-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-373000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
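
The StartCluster line dumps the whole cluster config in Go struct syntax. As a reading aid only, here is a hypothetical, heavily trimmed sketch of that shape, with field names taken from the dump (the real minikube type carries many more fields):

    // Hypothetical, trimmed sketch of the config shape dumped above.
    package config

    type Node struct {
    	Name              string // empty for the primary node
    	IP                string
    	Port              int
    	KubernetesVersion string
    	ControlPlane      bool
    	Worker            bool
    }

    type ClusterConfig struct {
    	Name     string          // "ha-373000"
    	Memory   int             // MB; 2200 above
    	CPUs     int             // 2
    	DiskSize int             // MB; 20000
    	Driver   string          // "hyperkit"
    	Nodes    []Node          // three nodes above: primary, m02, m04
    	Addons   map[string]bool // all false above
    }
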
	I0818 12:09:19.178593    3976 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0818 12:09:19.190838    3976 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 12:09:19.199172    3976 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 12:09:19.199186    3976 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 12:09:19.199227    3976 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 12:09:19.207402    3976 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 12:09:19.207710    3976 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-373000" does not appear in /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:09:19.207791    3976 kubeconfig.go:62] /Users/jenkins/minikube-integration/19423-1007/kubeconfig needs updating (will repair): [kubeconfig missing "ha-373000" cluster setting kubeconfig missing "ha-373000" context setting]
	I0818 12:09:19.207967    3976 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/kubeconfig: {Name:mk884388b2510ace5b23e8fdd3343cda9524a1f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
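
The kubeconfig.go lines above show the repair path: the "ha-373000" cluster and context entries are missing, so the kubeconfig is rewritten under a file lock. A rough sketch of that repair using client-go's clientcmd package, assuming k8s.io/client-go is available (server and CA values taken from the log):

    package main

    import (
    	"k8s.io/client-go/tools/clientcmd"
    	api "k8s.io/client-go/tools/clientcmd/api"
    )

    // repair ensures cluster and context entries for ha-373000 exist,
    // then writes the kubeconfig back; roughly what the repair above does.
    func repair(path string) error {
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		return err
    	}
    	cfg.Clusters["ha-373000"] = &api.Cluster{
    		Server:               "https://192.169.0.5:8443",
    		CertificateAuthority: "/Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt",
    	}
    	cfg.Contexts["ha-373000"] = &api.Context{Cluster: "ha-373000", AuthInfo: "ha-373000"}
    	cfg.CurrentContext = "ha-373000"
    	return clientcmd.WriteToFile(*cfg, path)
    }
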
	I0818 12:09:19.208584    3976 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:09:19.208770    3976 kapi.go:59] client config for ha-373000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/client.key", CAFile:"/Users/jenkins/minikube-integration/19423-1007/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x52acf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0818 12:09:19.209064    3976 cert_rotation.go:140] Starting client certificate rotation controller
	I0818 12:09:19.209255    3976 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 12:09:19.217108    3976 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0818 12:09:19.217125    3976 kubeadm.go:597] duration metric: took 17.934031ms to restartPrimaryControlPlane
	I0818 12:09:19.217132    3976 kubeadm.go:394] duration metric: took 38.665023ms to StartCluster
	I0818 12:09:19.217145    3976 settings.go:142] acquiring lock: {Name:mk54874f9a6ab5b428c8697c9ef71cbcdde2d89e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:09:19.217216    3976 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 12:09:19.217617    3976 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/kubeconfig: {Name:mk884388b2510ace5b23e8fdd3343cda9524a1f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 12:09:19.217869    3976 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 12:09:19.217886    3976 start.go:241] waiting for startup goroutines ...
	I0818 12:09:19.217906    3976 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 12:09:19.217983    3976 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:09:19.263234    3976 out.go:177] * Enabled addons: 
	I0818 12:09:19.284302    3976 addons.go:510] duration metric: took 66.388858ms for enable addons: enabled=[]
	I0818 12:09:19.284387    3976 start.go:246] waiting for cluster config update ...
	I0818 12:09:19.284400    3976 start.go:255] writing updated cluster config ...
	I0818 12:09:19.306484    3976 out.go:201] 
	I0818 12:09:19.327608    3976 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:09:19.327742    3976 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:09:19.350369    3976 out.go:177] * Starting "ha-373000-m02" control-plane node in "ha-373000" cluster
	I0818 12:09:19.392104    3976 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 12:09:19.392164    3976 cache.go:56] Caching tarball of preloaded images
	I0818 12:09:19.392336    3976 preload.go:172] Found /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0818 12:09:19.392355    3976 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 12:09:19.392486    3976 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:09:19.393415    3976 start.go:360] acquireMachinesLock for ha-373000-m02: {Name:mk31060d3397796eb838e7923e770e40ab40e545 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 12:09:19.393522    3976 start.go:364] duration metric: took 80.918µs to acquireMachinesLock for "ha-373000-m02"
	I0818 12:09:19.393546    3976 start.go:96] Skipping create...Using existing machine configuration
	I0818 12:09:19.393556    3976 fix.go:54] fixHost starting: m02
	I0818 12:09:19.393965    3976 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:09:19.393990    3976 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:09:19.403655    3976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52040
	I0818 12:09:19.404217    3976 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:09:19.404634    3976 main.go:141] libmachine: Using API Version  1
	I0818 12:09:19.404650    3976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:09:19.405004    3976 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:09:19.405118    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:19.405222    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetState
	I0818 12:09:19.405303    3976 main.go:141] libmachine: (ha-373000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:19.405380    3976 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid from json: 3847
	I0818 12:09:19.406287    3976 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid 3847 missing from process table
	I0818 12:09:19.406302    3976 fix.go:112] recreateIfNeeded on ha-373000-m02: state=Stopped err=<nil>
	I0818 12:09:19.406312    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	W0818 12:09:19.406463    3976 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 12:09:19.448356    3976 out.go:177] * Restarting existing hyperkit VM for "ha-373000-m02" ...
	I0818 12:09:19.469229    3976 main.go:141] libmachine: (ha-373000-m02) Calling .Start
	I0818 12:09:19.469501    3976 main.go:141] libmachine: (ha-373000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:19.469542    3976 main.go:141] libmachine: (ha-373000-m02) minikube might have been shut down in an unclean way; the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid
	I0818 12:09:19.471314    3976 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid 3847 missing from process table
	I0818 12:09:19.471327    3976 main.go:141] libmachine: (ha-373000-m02) DBG | pid 3847 is in state "Stopped"
	I0818 12:09:19.471351    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid...
	I0818 12:09:19.471584    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Using UUID 7a237572-4e62-4b98-a476-83254bfde967
	I0818 12:09:19.500704    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Generated MAC ca:b5:c4:e6:47:79
	I0818 12:09:19.500730    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000
	I0818 12:09:19.500855    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7a237572-4e62-4b98-a476-83254bfde967", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c4600)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:09:19.500885    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7a237572-4e62-4b98-a476-83254bfde967", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c4600)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0818 12:09:19.500929    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "7a237572-4e62-4b98-a476-83254bfde967", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/ha-373000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"}
	I0818 12:09:19.500977    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 7a237572-4e62-4b98-a476-83254bfde967 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/ha-373000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/tty,log=/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/bzimage,/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-373000"
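
The Arguments and CmdLine lines above show how the driver launches hyperkit: a pid file, 2 vCPUs, 2200M RAM, virtio net/block devices, the boot2docker ISO, an autopty serial console, and a kexec boot of bzimage/initrd with the kernel cmdline. A sketch of assembling that argv in Go; stateDir, uuid, and kernelCmdline stand in for the concrete values logged:

    package main

    import "os/exec"

    // hyperkitCmd rebuilds the argv logged above; values are placeholders.
    func hyperkitCmd(stateDir, uuid, kernelCmdline string) *exec.Cmd {
    	args := []string{
    		"-A", "-u",
    		"-F", stateDir + "/hyperkit.pid", // pid file checked on restart
    		"-c", "2", "-m", "2200M",
    		"-s", "0:0,hostbridge", "-s", "31,lpc",
    		"-s", "1:0,virtio-net",
    		"-U", uuid, // vmnet derives the guest MAC from this UUID
    		"-s", "2:0,virtio-blk," + stateDir + "/ha-373000-m02.rawdisk",
    		"-s", "3,ahci-cd," + stateDir + "/boot2docker.iso",
    		"-s", "4,virtio-rnd",
    		"-l", "com1,autopty=" + stateDir + "/tty,log=" + stateDir + "/console-ring",
    		"-f", "kexec," + stateDir + "/bzimage," + stateDir + "/initrd," + kernelCmdline,
    	}
    	return exec.Command("/usr/local/bin/hyperkit", args...)
    }
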
	I0818 12:09:19.500998    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0818 12:09:19.502361    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 DEBUG: hyperkit: Pid is 3997
	I0818 12:09:19.502828    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Attempt 0
	I0818 12:09:19.502885    3976 main.go:141] libmachine: (ha-373000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:19.502920    3976 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid from json: 3997
	I0818 12:09:19.504725    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Searching for ca:b5:c4:e6:47:79 in /var/db/dhcpd_leases ...
	I0818 12:09:19.504780    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0818 12:09:19.504828    3976 main.go:141] libmachine: (ha-373000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:be:21:66:25:9a:b1 ID:1,be:21:66:25:9a:b1 Lease:0x66c39856}
	I0818 12:09:19.504848    3976 main.go:141] libmachine: (ha-373000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:72:9e:9b:7f:e6:a8 ID:1,72:9e:9b:7f:e6:a8 Lease:0x66c246ad}
	I0818 12:09:19.504870    3976 main.go:141] libmachine: (ha-373000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:f2:8c:91:ee:dd:c0 ID:1,f2:8c:91:ee:dd:c0 Lease:0x66c397e0}
	I0818 12:09:19.504882    3976 main.go:141] libmachine: (ha-373000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:ca:b5:c4:e6:47:79 ID:1,ca:b5:c4:e6:47:79 Lease:0x66c3975b}
	I0818 12:09:19.504895    3976 main.go:141] libmachine: (ha-373000-m02) DBG | Found match: ca:b5:c4:e6:47:79
	I0818 12:09:19.504900    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetConfigRaw
	I0818 12:09:19.504907    3976 main.go:141] libmachine: (ha-373000-m02) DBG | IP: 192.169.0.6
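
The lease search above resolves the VM's IP by matching its generated MAC against macOS's /var/db/dhcpd_leases. A rough Go sketch of that lookup; the key=value block layout is inferred from the dhcp entries logged above, and real files may differ:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // findIPByMAC scans a dhcpd_leases file for a lease whose hw_address
    // ends with mac and returns that lease's ip_address.
    func findIPByMAC(path, mac string) (string, error) {
    	f, err := os.Open(path)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()

    	var ip string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		switch {
    		case strings.HasPrefix(line, "ip_address="):
    			ip = strings.TrimPrefix(line, "ip_address=")
    		case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac):
    			return ip, nil // ip_address precedes hw_address in each block
    		}
    	}
    	if err := sc.Err(); err != nil {
    		return "", err
    	}
    	return "", fmt.Errorf("no lease found for %s", mac)
    }
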
	I0818 12:09:19.505665    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetIP
	I0818 12:09:19.505858    3976 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/ha-373000/config.json ...
	I0818 12:09:19.506316    3976 machine.go:93] provisionDockerMachine start ...
	I0818 12:09:19.506328    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:19.506474    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:19.506602    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:19.506707    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:19.506790    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:19.506894    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:19.507039    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:19.507197    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:19.507205    3976 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 12:09:19.510551    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0818 12:09:19.519215    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0818 12:09:19.520168    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:09:19.520203    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:09:19.520228    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:09:19.520254    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:09:19.902342    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0818 12:09:19.902357    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0818 12:09:20.017440    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0818 12:09:20.017463    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0818 12:09:20.017471    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0818 12:09:20.017477    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0818 12:09:20.018303    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0818 12:09:20.018315    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0818 12:09:25.632462    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:25 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0818 12:09:25.632549    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:25 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0818 12:09:25.632559    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:25 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0818 12:09:25.657887    3976 main.go:141] libmachine: (ha-373000-m02) DBG | 2024/08/18 12:09:25 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0818 12:09:28.954523    3976 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.6:22: connect: connection refused
	I0818 12:09:32.012675    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 12:09:32.012690    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetMachineName
	I0818 12:09:32.012844    3976 buildroot.go:166] provisioning hostname "ha-373000-m02"
	I0818 12:09:32.012857    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetMachineName
	I0818 12:09:32.012969    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.013100    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:32.013206    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.013295    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.013399    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:32.013577    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:32.013797    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:32.013807    3976 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-373000-m02 && echo "ha-373000-m02" | sudo tee /etc/hostname
	I0818 12:09:32.083655    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-373000-m02
	
	I0818 12:09:32.083671    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.083802    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:32.083888    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.083968    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.084051    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:32.084177    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:32.084328    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:32.084343    3976 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-373000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-373000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-373000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 12:09:32.145743    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 12:09:32.145757    3976 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19423-1007/.minikube CaCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19423-1007/.minikube}
	I0818 12:09:32.145771    3976 buildroot.go:174] setting up certificates
	I0818 12:09:32.145778    3976 provision.go:84] configureAuth start
	I0818 12:09:32.145785    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetMachineName
	I0818 12:09:32.145913    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetIP
	I0818 12:09:32.146013    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.146119    3976 provision.go:143] copyHostCerts
	I0818 12:09:32.146155    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:09:32.146207    3976 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem, removing ...
	I0818 12:09:32.146213    3976 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem
	I0818 12:09:32.146346    3976 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/ca.pem (1082 bytes)
	I0818 12:09:32.146563    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:09:32.146599    3976 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem, removing ...
	I0818 12:09:32.146604    3976 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem
	I0818 12:09:32.146673    3976 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/cert.pem (1123 bytes)
	I0818 12:09:32.146816    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:09:32.146847    3976 exec_runner.go:144] found /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem, removing ...
	I0818 12:09:32.146852    3976 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem
	I0818 12:09:32.146916    3976 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19423-1007/.minikube/key.pem (1675 bytes)
	I0818 12:09:32.147063    3976 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca-key.pem org=jenkins.ha-373000-m02 san=[127.0.0.1 192.169.0.6 ha-373000-m02 localhost minikube]
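
provision.go issues a server certificate for the node with the SANs listed above (127.0.0.1, 192.169.0.6, ha-373000-m02, localhost, minikube), signed by the minikube CA. A compact sketch of issuing a certificate with those SANs via crypto/x509; it self-signs for brevity where minikube signs with its CA key, and error handling is elided:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Errors elided in this sketch.
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-373000-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs from the provision.go line above.
    		DNSNames:    []string{"ha-373000-m02", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
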
	I0818 12:09:32.439235    3976 provision.go:177] copyRemoteCerts
	I0818 12:09:32.439288    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 12:09:32.439303    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.439451    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:32.439555    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.439662    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:32.439767    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:09:32.473899    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 12:09:32.473971    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 12:09:32.492902    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 12:09:32.492977    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0818 12:09:32.512205    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 12:09:32.512269    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 12:09:32.531269    3976 provision.go:87] duration metric: took 385.496037ms to configureAuth
	I0818 12:09:32.531282    3976 buildroot.go:189] setting minikube options for container-runtime
	I0818 12:09:32.531440    3976 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:09:32.531454    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:32.531586    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.531687    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:32.531797    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.531905    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.531985    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:32.532087    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:32.532212    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:32.532220    3976 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0818 12:09:32.586134    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0818 12:09:32.586145    3976 buildroot.go:70] root file system type: tmpfs
	I0818 12:09:32.586228    3976 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0818 12:09:32.586239    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.586366    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:32.586454    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.586566    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.586649    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:32.586801    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:32.586940    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:32.586986    3976 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0818 12:09:32.654663    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0818 12:09:32.654688    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:32.654820    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:32.654904    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.654974    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:32.655053    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:32.655180    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:32.655330    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:32.655343    3976 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0818 12:09:34.321102    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0818 12:09:34.321115    3976 machine.go:96] duration metric: took 14.8152512s to provisionDockerMachine
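
The shell one-liner above makes the unit update idempotent: diff the freshly written docker.service.new against the live unit, and only when they differ move it into place, then daemon-reload, enable, and restart docker. The same pattern as a local Go sketch; minikube runs the equivalent over SSH as root:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // updateUnit installs newPath over livePath only when they differ,
    // then reloads systemd and restarts docker, as the shell above does.
    func updateUnit(newPath, livePath string) error {
    	if exec.Command("diff", "-u", livePath, newPath).Run() == nil {
    		return nil // identical: nothing to do
    	}
    	if err := os.Rename(newPath, livePath); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"systemctl", "daemon-reload"},
    		{"systemctl", "enable", "docker"},
    		{"systemctl", "restart", "docker"},
    	} {
    		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
    			return fmt.Errorf("%v: %v\n%s", args, err, out)
    		}
    	}
    	return nil
    }
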
	I0818 12:09:34.321123    3976 start.go:293] postStartSetup for "ha-373000-m02" (driver="hyperkit")
	I0818 12:09:34.321131    3976 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 12:09:34.321140    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:34.321324    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 12:09:34.321348    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:34.321440    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:34.321528    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:34.321619    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:34.321715    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:09:34.356724    3976 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 12:09:34.363921    3976 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 12:09:34.363935    3976 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/addons for local assets ...
	I0818 12:09:34.364038    3976 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19423-1007/.minikube/files for local assets ...
	I0818 12:09:34.364185    3976 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> 15262.pem in /etc/ssl/certs
	I0818 12:09:34.364192    3976 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem -> /etc/ssl/certs/15262.pem
	I0818 12:09:34.364347    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 12:09:34.379409    3976 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/ssl/certs/15262.pem --> /etc/ssl/certs/15262.pem (1708 bytes)
	I0818 12:09:34.407459    3976 start.go:296] duration metric: took 86.328927ms for postStartSetup
	I0818 12:09:34.407481    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:34.407638    3976 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0818 12:09:34.407658    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:34.407738    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:34.407823    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:34.407908    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:34.407985    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:09:34.441305    3976 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0818 12:09:34.441365    3976 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0818 12:09:34.475756    3976 fix.go:56] duration metric: took 15.082665832s for fixHost
	I0818 12:09:34.475780    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:34.475917    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:34.476014    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:34.476109    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:34.476204    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:34.476334    3976 main.go:141] libmachine: Using SSH client type: native
	I0818 12:09:34.476475    3976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf3ea0] 0x3bf6c00 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0818 12:09:34.476483    3976 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 12:09:34.531245    3976 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724008174.705830135
	
	I0818 12:09:34.531256    3976 fix.go:216] guest clock: 1724008174.705830135
	I0818 12:09:34.531265    3976 fix.go:229] Guest: 2024-08-18 12:09:34.705830135 -0700 PDT Remote: 2024-08-18 12:09:34.475769 -0700 PDT m=+34.122913514 (delta=230.061135ms)
	I0818 12:09:34.531276    3976 fix.go:200] guest clock delta is within tolerance: 230.061135ms
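
fix.go reads the guest clock with `date +%s.%N` and compares it against the host's, accepting the 230ms delta above as within tolerance. A small sketch of that comparison; the 2s tolerance here is illustrative, not minikube's actual threshold:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // clockDelta parses `date +%s.%N` output and returns guest-minus-host
    // skew. float64 parsing loses sub-microsecond precision, which is
    // fine for a skew check.
    func clockDelta(guestOut string) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return guest.Sub(time.Now()), nil
    }

    func main() {
    	d, _ := clockDelta("1724008174.705830135")
    	fmt.Println("delta:", d, "within 2s tolerance:", d.Abs() < 2*time.Second)
    }
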
	I0818 12:09:34.531281    3976 start.go:83] releasing machines lock for "ha-373000-m02", held for 15.138221498s
	I0818 12:09:34.531298    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:34.531428    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetIP
	I0818 12:09:34.555141    3976 out.go:177] * Found network options:
	I0818 12:09:34.576875    3976 out.go:177]   - NO_PROXY=192.169.0.5
	W0818 12:09:34.597784    3976 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 12:09:34.597830    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:34.598688    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:34.598932    3976 main.go:141] libmachine: (ha-373000-m02) Calling .DriverName
	I0818 12:09:34.599031    3976 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 12:09:34.599086    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	W0818 12:09:34.599150    3976 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 12:09:34.599257    3976 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0818 12:09:34.599278    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHHostname
	I0818 12:09:34.599308    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:34.599482    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHPort
	I0818 12:09:34.599521    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:34.599684    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHKeyPath
	I0818 12:09:34.599720    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:34.599871    3976 main.go:141] libmachine: (ha-373000-m02) Calling .GetSSHUsername
	I0818 12:09:34.599921    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	I0818 12:09:34.600032    3976 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m02/id_rsa Username:docker}
	W0818 12:09:34.631739    3976 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 12:09:34.631799    3976 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 12:09:34.677593    3976 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 12:09:34.677615    3976 start.go:495] detecting cgroup driver to use...
	I0818 12:09:34.677737    3976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:09:34.693773    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0818 12:09:34.702951    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0818 12:09:34.711799    3976 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0818 12:09:34.711840    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0818 12:09:34.720906    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:09:34.729957    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0818 12:09:34.738902    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 12:09:34.747932    3976 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 12:09:34.757312    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0818 12:09:34.766375    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0818 12:09:34.775307    3976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
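
The series of sed runs above rewrites /etc/containerd/config.toml for the cgroupfs driver; the pivotal edit forces SystemdCgroup = false so containerd's runc shim matches the kubelet's cgroup driver. The same edit expressed as a Go regexp, as a sketch:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // disableSystemdCgroup mirrors the key sed edit above: force the
    // containerd runc shim onto the cgroupfs driver.
    func disableSystemdCgroup(toml []byte) []byte {
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	return re.ReplaceAll(toml, []byte("${1}SystemdCgroup = false"))
    }

    func main() {
    	in := []byte("    SystemdCgroup = true\n")
    	fmt.Print(string(disableSystemdCgroup(in)))
    }
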
	I0818 12:09:34.784400    3976 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 12:09:34.792630    3976 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 12:09:34.801021    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:34.911872    3976 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0818 12:09:34.930682    3976 start.go:495] detecting cgroup driver to use...
	I0818 12:09:34.930753    3976 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0818 12:09:34.944697    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:09:34.956782    3976 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 12:09:34.974233    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 12:09:34.986114    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:09:34.998297    3976 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0818 12:09:35.018378    3976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 12:09:35.029759    3976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 12:09:35.044553    3976 ssh_runner.go:195] Run: which cri-dockerd
	I0818 12:09:35.047654    3976 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0818 12:09:35.055897    3976 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0818 12:09:35.069339    3976 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0818 12:09:35.163048    3976 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0818 12:09:35.263866    3976 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0818 12:09:35.263888    3976 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0818 12:09:35.281642    3976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 12:09:35.375004    3976 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 12:10:36.400829    3976 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.027707476s)
	I0818 12:10:36.400907    3976 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0818 12:10:36.437434    3976 out.go:201] 
	W0818 12:10:36.459246    3976 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 18 19:09:33 ha-373000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 18 19:09:33 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:33.132734313Z" level=info msg="Starting up"
	Aug 18 19:09:33 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:33.133217341Z" level=info msg="containerd not running, starting managed containerd"
	Aug 18 19:09:33 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:33.133706453Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=503
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.150884592Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165526624Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165600672Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165665661Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165701505Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165883163Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.165980711Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166114419Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166158739Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166192923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166222480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166373263Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.166624364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168284638Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168338968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168477528Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168522410Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168684236Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.168742254Z" level=info msg="metadata content store policy set" policy=shared
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172229271Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172291175Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172328725Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172361584Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172397084Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172469115Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172636000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172713269Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172756026Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172790721Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172822478Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172857013Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172889097Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172923123Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172955052Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.172985350Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173017995Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173047134Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173082956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173138952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173171857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173266115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173303729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173337305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173367548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173397195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173426651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173461907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173491945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173521151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173551817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173584158Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173620017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173651734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173681138Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173753818Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173797160Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173851051Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173888629Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173919044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173948712Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.173979628Z" level=info msg="NRI interface is disabled by configuration."
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.174202763Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.174288578Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.174373231Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 18 19:09:33 ha-373000-m02 dockerd[503]: time="2024-08-18T19:09:33.174419718Z" level=info msg="containerd successfully booted in 0.024281s"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.163281667Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.193454663Z" level=info msg="Loading containers: start."
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.358483324Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.419779026Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.464759087Z" level=info msg="Loading containers: done."
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.475407585Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.475556691Z" level=info msg="Daemon has completed initialization"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.493178383Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 18 19:09:34 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:34.493236047Z" level=info msg="API listen on [::]:2376"
	Aug 18 19:09:34 ha-373000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 18 19:09:35 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:35.562066100Z" level=info msg="Processing signal 'terminated'"
	Aug 18 19:09:35 ha-373000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 18 19:09:35 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:35.563196599Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 18 19:09:35 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:35.563381674Z" level=info msg="Daemon shutdown complete"
	Aug 18 19:09:35 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:35.563404669Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 18 19:09:35 ha-373000-m02 dockerd[496]: time="2024-08-18T19:09:35.563423915Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 18 19:09:36 ha-373000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 18 19:09:36 ha-373000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 18 19:09:36 ha-373000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 18 19:09:36 ha-373000-m02 dockerd[1172]: time="2024-08-18T19:09:36.603637435Z" level=info msg="Starting up"
	Aug 18 19:10:36 ha-373000-m02 dockerd[1172]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 18 19:10:36 ha-373000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 18 19:10:36 ha-373000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 18 19:10:36 ha-373000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0818 12:10:36.459348    3976 out.go:270] * 
	W0818 12:10:36.460605    3976 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 12:10:36.503171    3976 out.go:201] 
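	
	The failure above is the root symptom on ha-373000-m02: after systemd restarts docker.service, dockerd[1172] spends its whole startup deadline dialing /run/containerd/containerd.sock and exits, so the node never rejoins the cluster. A minimal sketch for confirming containerd's state on the guest, assuming the socket path from these logs and that the guest image ships ctr:
	
	    # unit state for both daemons; containerd must be up before dockerd can dial it
	    sudo systemctl status containerd docker --no-pager
	    # probe the socket directly; a healthy containerd answers with version info
	    sudo ctr --address /run/containerd/containerd.sock version
	    # containerd's own logs for the window in which dockerd timed out
	    sudo journalctl -u containerd --since "2024-08-18 19:09:30" --no-pager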
	
	
	==> Docker <==
	Aug 18 19:09:46 ha-373000 dockerd[1177]: time="2024-08-18T19:09:46.453438095Z" level=info msg="shim disconnected" id=6f6bc1d9592b465e3b7c4d9db0e74b67040cbbed37626775b4aad68af80ddeb2 namespace=moby
	Aug 18 19:09:46 ha-373000 dockerd[1177]: time="2024-08-18T19:09:46.453510509Z" level=warning msg="cleaning up after shim disconnected" id=6f6bc1d9592b465e3b7c4d9db0e74b67040cbbed37626775b4aad68af80ddeb2 namespace=moby
	Aug 18 19:09:46 ha-373000 dockerd[1177]: time="2024-08-18T19:09:46.453519178Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 18 19:09:46 ha-373000 dockerd[1169]: time="2024-08-18T19:09:46.453809871Z" level=info msg="ignoring event" container=6f6bc1d9592b465e3b7c4d9db0e74b67040cbbed37626775b4aad68af80ddeb2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 18 19:09:47 ha-373000 dockerd[1177]: time="2024-08-18T19:09:47.461847284Z" level=info msg="shim disconnected" id=0a3644cb0db7c69e66c1b1d0f3234fba20e233a15e27e5772148d80d14f33f4e namespace=moby
	Aug 18 19:09:47 ha-373000 dockerd[1177]: time="2024-08-18T19:09:47.461900797Z" level=warning msg="cleaning up after shim disconnected" id=0a3644cb0db7c69e66c1b1d0f3234fba20e233a15e27e5772148d80d14f33f4e namespace=moby
	Aug 18 19:09:47 ha-373000 dockerd[1177]: time="2024-08-18T19:09:47.461909634Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 18 19:09:47 ha-373000 dockerd[1169]: time="2024-08-18T19:09:47.462210879Z" level=info msg="ignoring event" container=0a3644cb0db7c69e66c1b1d0f3234fba20e233a15e27e5772148d80d14f33f4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 18 19:10:04 ha-373000 dockerd[1177]: time="2024-08-18T19:10:04.870147305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 18 19:10:04 ha-373000 dockerd[1177]: time="2024-08-18T19:10:04.870333575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 18 19:10:04 ha-373000 dockerd[1177]: time="2024-08-18T19:10:04.870347019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:10:04 ha-373000 dockerd[1177]: time="2024-08-18T19:10:04.870447403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:10:06 ha-373000 dockerd[1177]: time="2024-08-18T19:10:06.866261869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 18 19:10:06 ha-373000 dockerd[1177]: time="2024-08-18T19:10:06.866358878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 18 19:10:06 ha-373000 dockerd[1177]: time="2024-08-18T19:10:06.866371963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:10:06 ha-373000 dockerd[1177]: time="2024-08-18T19:10:06.866695913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 19:10:27 ha-373000 dockerd[1177]: time="2024-08-18T19:10:27.508003021Z" level=info msg="shim disconnected" id=24788de6a779b6b3b57b2e2becb6248a4b461c40237836c9a085858f19bf537e namespace=moby
	Aug 18 19:10:27 ha-373000 dockerd[1177]: time="2024-08-18T19:10:27.508123156Z" level=warning msg="cleaning up after shim disconnected" id=24788de6a779b6b3b57b2e2becb6248a4b461c40237836c9a085858f19bf537e namespace=moby
	Aug 18 19:10:27 ha-373000 dockerd[1177]: time="2024-08-18T19:10:27.508131683Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 18 19:10:27 ha-373000 dockerd[1169]: time="2024-08-18T19:10:27.508457282Z" level=info msg="ignoring event" container=24788de6a779b6b3b57b2e2becb6248a4b461c40237836c9a085858f19bf537e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 18 19:10:27 ha-373000 dockerd[1177]: time="2024-08-18T19:10:27.520349911Z" level=warning msg="cleanup warnings time=\"2024-08-18T19:10:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 18 19:10:27 ha-373000 dockerd[1177]: time="2024-08-18T19:10:27.569676280Z" level=info msg="shim disconnected" id=7e27aa53db964848f64d9282a63325b77e8066d4f7afd7e078ca49d74f625756 namespace=moby
	Aug 18 19:10:27 ha-373000 dockerd[1177]: time="2024-08-18T19:10:27.569734174Z" level=warning msg="cleaning up after shim disconnected" id=7e27aa53db964848f64d9282a63325b77e8066d4f7afd7e078ca49d74f625756 namespace=moby
	Aug 18 19:10:27 ha-373000 dockerd[1177]: time="2024-08-18T19:10:27.569742722Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 18 19:10:27 ha-373000 dockerd[1169]: time="2024-08-18T19:10:27.569876898Z" level=info msg="ignoring event" container=7e27aa53db964848f64d9282a63325b77e8066d4f7afd7e078ca49d74f625756 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
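	
	The "shim disconnected" / "ignoring event" pairs above are containerd tearing down its shim each time a control-plane container exits; the ids 7e27aa53db96... and 24788de6a779... match the Exited kube-apiserver and kube-controller-manager rows in the status table below. A hedged sketch for mapping a shim id back to a container on a Docker-runtime node (an id prefix is enough):
	
	    # name and status of the container behind a shim id
	    docker ps -a --no-trunc --filter "id=7e27aa53db96" --format "{{.Names}}\t{{.Status}}"
	    # exit code and finish time, which line up with the TaskDelete events above
	    docker inspect --format '{{.Name}} exit={{.State.ExitCode}} at {{.State.FinishedAt}}' 7e27aa53db96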
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7e27aa53db964       604f5db92eaa8       39 seconds ago       Exited              kube-apiserver            3                   5f2bcb86e47be       kube-apiserver-ha-373000
	24788de6a779b       045733566833c       41 seconds ago       Exited              kube-controller-manager   4                   45b85b05f9eab       kube-controller-manager-ha-373000
	e7bf93d680505       38af8ddebf499       About a minute ago   Running             kube-vip                  1                   37cbb7af9134a       kube-vip-ha-373000
	5bb7217cec87f       1766f54c897f0       About a minute ago   Running             kube-scheduler            2                   11d6e68c74890       kube-scheduler-ha-373000
	4ad014ace2b0a       2e96e5913fc06       About a minute ago   Running             etcd                      2                   4905344ca55ee       etcd-ha-373000
	eb459a6cac5c5       6e38f40d628db       4 minutes ago        Exited              storage-provisioner       2                   3772c138aa65e       storage-provisioner
	fc1b30cd2c8f2       8c811b4aec35f       5 minutes ago        Exited              busybox                   1                   eb4ed9664dda9       busybox-7dff88458-hdg8r
	f3dbf3c176d9d       cbb01a7bd410d       5 minutes ago        Exited              coredns                   1                   fc1f2fb60f7c5       coredns-6f6b679f8f-rcfmc
	09b8ded75e80f       cbb01a7bd410d       5 minutes ago        Exited              coredns                   1                   bfce6a3dd1783       coredns-6f6b679f8f-hv98f
	530d580001894       ad83b2ca7b09e       5 minutes ago        Exited              kube-proxy                1                   c8f48c6f44e55       kube-proxy-2xkhp
	fbeef7aab770f       12968670680f4       5 minutes ago        Exited              kindnet-cni               1                   32a6ca59d02e7       kindnet-k4c4p
	ebe78e53d91d8       38af8ddebf499       5 minutes ago        Exited              kube-vip                  0                   32cc18cf0bf63       kube-vip-ha-373000
	a9e532272f1be       2e96e5913fc06       5 minutes ago        Exited              etcd                      1                   4c11500a40693       etcd-ha-373000
	de016fdbd6fe9       1766f54c897f0       5 minutes ago        Exited              kube-scheduler            1                   a3cc486386c46       kube-scheduler-ha-373000
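	
	Reading the table: kube-apiserver is on restart attempt 3 and kube-controller-manager on attempt 4 while etcd, kube-scheduler and kube-vip are still Running, and everything that needs the API (kube-proxy, coredns, kindnet, busybox, storage-provisioner) is left Exited from the previous boot. A sketch for watching the crash loop live, assuming minikube's bundled crictl:
	
	    # ATTEMPT increments on every restart of the crash-looping pods
	    sudo crictl ps -a --name kube-apiserver
	    sudo crictl ps -a --name kube-controller-manager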
	
	
	==> coredns [09b8ded75e80] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54168 - 48100 "HINFO IN 5449853140043981156.1960656544577820065. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012696853s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1317389180]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.060) (total time: 30002ms):
	Trace[1317389180]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (19:06:13.063)
	Trace[1317389180]: [30.002782846s] [30.002782846s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[804407349]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.060) (total time: 30003ms):
	Trace[804407349]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (19:06:13.063)
	Trace[804407349]: [30.003234686s] [30.003234686s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1407395902]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.063) (total time: 30001ms):
	Trace[1407395902]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (19:06:13.064)
	Trace[1407395902]: [30.001205512s] [30.001205512s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f3dbf3c176d9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48257 - 13179 "HINFO IN 3102078210809204073.2916918949998232158. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.013387746s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1929152146]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.060) (total time: 30003ms):
	Trace[1929152146]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (19:06:13.063)
	Trace[1929152146]: [30.003742558s] [30.003742558s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[763765503]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.060) (total time: 30003ms):
	Trace[763765503]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (19:06:13.064)
	Trace[763765503]: [30.003508272s] [30.003508272s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1437534784]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:05:43.063) (total time: 30000ms):
	Trace[1437534784]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:06:13.064)
	Trace[1437534784]: [30.000417221s] [30.000417221s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
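	
	Both coredns replicas fail the same way: every list against https://10.96.0.1:443 (the in-cluster kubernetes Service VIP that fronts the API server) times out after 30s, then the pods are SIGTERMed. With the apiserver crash-looping there is nothing behind the VIP to answer. A hedged check once kubectl can reach the cluster again:
	
	    # the kubernetes Service should list at least one ready apiserver endpoint
	    kubectl get endpoints kubernetes -n default
	    # confirm the ClusterIP the coredns pods were dialing (10.96.0.1 above)
	    kubectl get svc kubernetes -n default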
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0818 19:10:46.073407    3235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0818 19:10:46.074914    3235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0818 19:10:46.076555    3235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0818 19:10:46.077810    3235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0818 19:10:46.079287    3235 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
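	
	The describe-nodes failure is a downstream symptom, not a separate fault: nothing is listening on 8443 inside the guest while kube-apiserver is down. A quick probe from the node:
	
	    # expect "connection refused" until the apiserver stays up
	    curl -k --max-time 5 https://localhost:8443/healthz
	    sudo ss -tlnp | grep 8443 || echo "no listener on 8443"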
	
	
	==> dmesg <==
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.035419] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007963] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.691053] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000000] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006881] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.891457] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.229875] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000003] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.300202] systemd-fstab-generator[467]: Ignoring "noauto" option for root device
	[  +0.101114] systemd-fstab-generator[479]: Ignoring "noauto" option for root device
	[  +2.001813] systemd-fstab-generator[1098]: Ignoring "noauto" option for root device
	[  +0.247527] systemd-fstab-generator[1135]: Ignoring "noauto" option for root device
	[  +0.100995] systemd-fstab-generator[1147]: Ignoring "noauto" option for root device
	[  +0.114396] systemd-fstab-generator[1161]: Ignoring "noauto" option for root device
	[  +0.050935] kauditd_printk_skb: 145 callbacks suppressed
	[  +2.471749] systemd-fstab-generator[1379]: Ignoring "noauto" option for root device
	[  +0.100344] systemd-fstab-generator[1391]: Ignoring "noauto" option for root device
	[  +0.088670] systemd-fstab-generator[1403]: Ignoring "noauto" option for root device
	[  +0.117158] systemd-fstab-generator[1418]: Ignoring "noauto" option for root device
	[  +0.433473] systemd-fstab-generator[1580]: Ignoring "noauto" option for root device
	[  +6.511307] kauditd_printk_skb: 168 callbacks suppressed
	[ +21.355887] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [4ad014ace2b0] <==
	{"level":"warn","ts":"2024-08-18T19:10:40.523142Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"64535d9f3a4791ce","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:10:40.523187Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"64535d9f3a4791ce","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:10:40.889538Z","caller":"etcdserver/v3_server.go:932","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2024-08-18T19:10:40.890745Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.003668586s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2024-08-18T19:10:40.890903Z","caller":"traceutil/trace.go:171","msg":"trace[1509650294] range","detail":"{range_begin:; range_end:; }","duration":"7.003839701s","start":"2024-08-18T19:10:33.887048Z","end":"2024-08-18T19:10:40.890888Z","steps":["trace[1509650294] 'agreement among raft nodes before linearized reading'  (duration: 7.003664446s)"],"step_count":1}
	{"level":"error","ts":"2024-08-18T19:10:40.891075Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: request timed out\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-18T19:10:41.039575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:41.039684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:41.039704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:41.039722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 2939] sent MsgPreVote request to 64535d9f3a4791ce at term 3"}
	{"level":"warn","ts":"2024-08-18T19:10:42.508620Z","caller":"etcdserver/server.go:2139","msg":"failed to publish local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-373000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2024-08-18T19:10:42.840252Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:42.840306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:42.840320Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:42.840332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 2939] sent MsgPreVote request to 64535d9f3a4791ce at term 3"}
	{"level":"warn","ts":"2024-08-18T19:10:44.381825Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740481722400530,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-18T19:10:44.640271Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:44.640365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:44.640391Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-08-18T19:10:44.640410Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 2939] sent MsgPreVote request to 64535d9f3a4791ce at term 3"}
	{"level":"warn","ts":"2024-08-18T19:10:44.882807Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740481722400530,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-18T19:10:45.385704Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740481722400530,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-18T19:10:45.523889Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"64535d9f3a4791ce","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:10:45.523917Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"64535d9f3a4791ce","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:10:45.887337Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15583740481722400530,"retry-timeout":"500ms"}
	
	
	==> etcd [a9e532272f1b] <==
	{"level":"warn","ts":"2024-08-18T19:08:52.581006Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T19:08:49.021104Z","time spent":"3.559898593s","remote":"127.0.0.1:56420","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":0,"response size":0,"request content":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true "}
	2024/08/18 19:08:52 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-18T19:08:52.581089Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"6.986891044s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-18T19:08:52.581101Z","caller":"traceutil/trace.go:171","msg":"trace[1676890744] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; }","duration":"6.986905757s","start":"2024-08-18T19:08:45.594192Z","end":"2024-08-18T19:08:52.581098Z","steps":["trace[1676890744] 'agreement among raft nodes before linearized reading'  (duration: 6.986891942s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T19:08:52.581111Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T19:08:45.594168Z","time spent":"6.986940437s","remote":"127.0.0.1:56392","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	2024/08/18 19:08:52 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-18T19:08:52.581159Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"5.633749967s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-18T19:08:52.581170Z","caller":"traceutil/trace.go:171","msg":"trace[1130682409] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; }","duration":"5.633762365s","start":"2024-08-18T19:08:46.947405Z","end":"2024-08-18T19:08:52.581167Z","steps":["trace[1130682409] 'agreement among raft nodes before linearized reading'  (duration: 5.633750027s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T19:08:52.581180Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T19:08:46.947373Z","time spent":"5.633803888s","remote":"127.0.0.1:56504","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":0,"response size":0,"request content":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true "}
	2024/08/18 19:08:52 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-18T19:08:52.581225Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T19:08:52.217567Z","time spent":"363.656855ms","remote":"127.0.0.1:56498","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/08/18 19:08:52 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-18T19:08:52.608176Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-18T19:08:52.608248Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-18T19:08:52.608286Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-18T19:08:52.608395Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:08:52.608428Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:08:52.608446Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:08:52.608520Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:08:52.608546Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:08:52.608595Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:08:52.608606Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"64535d9f3a4791ce"}
	{"level":"info","ts":"2024-08-18T19:08:52.610214Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-18T19:08:52.610316Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-18T19:08:52.610348Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-373000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> kernel <==
	 19:10:46 up 1 min,  0 users,  load average: 0.16, 0.10, 0.04
	Linux ha-373000 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [fbeef7aab770] <==
	I0818 19:08:13.321287       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	I0818 19:08:13.321527       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0818 19:08:13.321536       1 main.go:299] handling current node
	I0818 19:08:13.321545       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0818 19:08:13.321548       1 main.go:322] Node ha-373000-m02 has CIDR [10.244.1.0/24] 
	I0818 19:08:23.318236       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0818 19:08:23.318272       1 main.go:322] Node ha-373000-m02 has CIDR [10.244.1.0/24] 
	I0818 19:08:23.318358       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0818 19:08:23.318384       1 main.go:322] Node ha-373000-m03 has CIDR [10.244.2.0/24] 
	I0818 19:08:23.318431       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0818 19:08:23.318455       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	I0818 19:08:23.318492       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0818 19:08:23.318516       1 main.go:299] handling current node
	I0818 19:08:33.318121       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0818 19:08:33.318160       1 main.go:299] handling current node
	I0818 19:08:33.318171       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0818 19:08:33.318175       1 main.go:322] Node ha-373000-m02 has CIDR [10.244.1.0/24] 
	I0818 19:08:33.318256       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0818 19:08:33.318261       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	I0818 19:08:43.314133       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0818 19:08:43.314185       1 main.go:322] Node ha-373000-m04 has CIDR [10.244.3.0/24] 
	I0818 19:08:43.314278       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0818 19:08:43.314360       1 main.go:299] handling current node
	I0818 19:08:43.314444       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0818 19:08:43.314482       1 main.go:322] Node ha-373000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [7e27aa53db96] <==
	I0818 19:10:06.959907       1 options.go:228] external host was not specified, using 192.169.0.5
	I0818 19:10:06.961347       1 server.go:142] Version: v1.31.0
	I0818 19:10:06.961387       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:10:07.546161       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0818 19:10:07.549946       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0818 19:10:07.552371       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0818 19:10:07.552381       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0818 19:10:07.552555       1 instance.go:232] Using reconciler: lease
	W0818 19:10:27.545475       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0818 19:10:27.545529       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0818 19:10:27.554420       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	W0818 19:10:27.554432       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	
	
	==> kube-controller-manager [24788de6a779] <==
	I0818 19:10:05.103965       1 serving.go:386] Generated self-signed cert in-memory
	I0818 19:10:05.483625       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0818 19:10:05.483663       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:10:05.484840       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0818 19:10:05.484954       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0818 19:10:05.484863       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0818 19:10:05.485038       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0818 19:10:27.488487       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": net/http: TLS handshake timeout"
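	
	These two logs close the causal chain: kube-apiserver aborts startup ("Error creating leases ... context deadline exceeded") because its etcd backend on 127.0.0.1:2379 never becomes ready while the peer is unreachable, and kube-controller-manager then times out waiting on the apiserver's /healthz. A hedged probe of both health endpoints from the guest; etcd's client port requires client TLS, so its serving cert is reused here, and the paths are an assumption:
	
	    # the controller-manager gave up when this never answered within its wait
	    curl -k --max-time 10 https://192.169.0.5:8443/healthz
	    # etcd answered /readyz with 503 above; reproduce with its serving cert
	    sudo curl -sk --cert /var/lib/minikube/certs/etcd/server.crt \
	         --key /var/lib/minikube/certs/etcd/server.key \
	         https://127.0.0.1:2379/readyz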
	
	
	==> kube-proxy [530d58000189] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 19:05:43.260298       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 19:05:43.283054       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	E0818 19:05:43.283201       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 19:05:43.332462       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 19:05:43.332509       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 19:05:43.332527       1 server_linux.go:169] "Using iptables Proxier"
	I0818 19:05:43.335382       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 19:05:43.336178       1 server.go:483] "Version info" version="v1.31.0"
	I0818 19:05:43.336209       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:05:43.339664       1 config.go:197] "Starting service config controller"
	I0818 19:05:43.340475       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 19:05:43.340854       1 config.go:104] "Starting endpoint slice config controller"
	I0818 19:05:43.340884       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 19:05:43.342595       1 config.go:326] "Starting node config controller"
	I0818 19:05:43.342621       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 19:05:43.440978       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0818 19:05:43.441099       1 shared_informer.go:320] Caches are synced for service config
	I0818 19:05:43.442676       1 shared_informer.go:320] Caches are synced for node config
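	
	The kube-proxy section is noise rather than a failure: the "Error cleaning up nftables rules ... Operation not supported" lines are best-effort removal of nft tables on a guest kernel without nf_tables, after which it detects no IPv6 iptables support and runs the IPv4 iptables proxier normally (its caches sync at the end). A quick way to confirm on the node:
	
	    # the Buildroot 5.10.207 kernel here has no nf_tables module
	    lsmod | grep nf_tables || echo "nf_tables not available"
	    # the proxier actually in use is logged above: "Using iptables Proxier"
	    iptables -V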
	
	
	==> kube-scheduler [5bb7217cec87] <==
	E0818 19:10:28.561278       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48758->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.561149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48800->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.561322       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48800->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.561446       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48768->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.561856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.169.0.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48768->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.562037       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48784->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.562108       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48784->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.562348       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:55098->192.169.0.5:8443: read: connection reset by peer
	W0818 19:10:28.562424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48820->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.562538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48820->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	E0818 19:10:28.562490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:55098->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.562730       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48804->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.563108       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48804->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.562973       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48786->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.563171       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48786->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.563330       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48824->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.563535       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48824->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.563376       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48782->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.563736       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48782->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.563801       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48790->192.169.0.5:8443: read: connection reset by peer
	E0818 19:10:28.563955       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.5:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:48790->192.169.0.5:8443: read: connection reset by peer" logger="UnhandledError"
	W0818 19:10:28.680351       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0818 19:10:28.680700       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:10:29.353980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
	E0818 19:10:29.354184       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.5:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-scheduler [de016fdbd6fe] <==
	I0818 19:04:58.645297       1 serving.go:386] Generated self-signed cert in-memory
	W0818 19:05:08.939365       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0818 19:05:08.939390       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0818 19:05:08.939395       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0818 19:05:17.672661       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0818 19:05:17.674961       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:05:17.680297       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0818 19:05:17.680709       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0818 19:05:17.683175       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0818 19:05:17.689784       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0818 19:05:17.786103       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0818 19:08:52.663744       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0818 19:08:52.664520       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0818 19:08:52.664805       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0818 19:08:52.665618       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 18 19:10:27 ha-373000 kubelet[1587]: I0818 19:10:27.792994    1587 scope.go:117] "RemoveContainer" containerID="24788de6a779b6b3b57b2e2becb6248a4b461c40237836c9a085858f19bf537e"
	Aug 18 19:10:27 ha-373000 kubelet[1587]: E0818 19:10:27.793092    1587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-373000_kube-system(ed589efba4f3e45b48b475449dc23ab8)\"" pod="kube-system/kube-controller-manager-ha-373000" podUID="ed589efba4f3e45b48b475449dc23ab8"
	Aug 18 19:10:27 ha-373000 kubelet[1587]: I0818 19:10:27.797222    1587 scope.go:117] "RemoveContainer" containerID="0a3644cb0db7c69e66c1b1d0f3234fba20e233a15e27e5772148d80d14f33f4e"
	Aug 18 19:10:28 ha-373000 kubelet[1587]: E0818 19:10:28.895968    1587 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-373000\" not found"
	Aug 18 19:10:29 ha-373000 kubelet[1587]: I0818 19:10:29.422762    1587 scope.go:117] "RemoveContainer" containerID="24788de6a779b6b3b57b2e2becb6248a4b461c40237836c9a085858f19bf537e"
	Aug 18 19:10:29 ha-373000 kubelet[1587]: E0818 19:10:29.423034    1587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-373000_kube-system(ed589efba4f3e45b48b475449dc23ab8)\"" pod="kube-system/kube-controller-manager-ha-373000" podUID="ed589efba4f3e45b48b475449dc23ab8"
	Aug 18 19:10:32 ha-373000 kubelet[1587]: E0818 19:10:32.507755    1587 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-373000.17ece84946cc9aa1  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-373000,UID:ha-373000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-373000,},FirstTimestamp:2024-08-18 19:09:18.794128033 +0000 UTC m=+0.098935686,LastTimestamp:2024-08-18 19:09:18.794128033 +0000 UTC m=+0.098935686,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-373000,}"
	Aug 18 19:10:33 ha-373000 kubelet[1587]: I0818 19:10:33.051084    1587 scope.go:117] "RemoveContainer" containerID="24788de6a779b6b3b57b2e2becb6248a4b461c40237836c9a085858f19bf537e"
	Aug 18 19:10:33 ha-373000 kubelet[1587]: E0818 19:10:33.051343    1587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-373000_kube-system(ed589efba4f3e45b48b475449dc23ab8)\"" pod="kube-system/kube-controller-manager-ha-373000" podUID="ed589efba4f3e45b48b475449dc23ab8"
	Aug 18 19:10:33 ha-373000 kubelet[1587]: I0818 19:10:33.366165    1587 kubelet_node_status.go:72] "Attempting to register node" node="ha-373000"
	Aug 18 19:10:34 ha-373000 kubelet[1587]: I0818 19:10:34.986485    1587 scope.go:117] "RemoveContainer" containerID="7e27aa53db964848f64d9282a63325b77e8066d4f7afd7e078ca49d74f625756"
	Aug 18 19:10:34 ha-373000 kubelet[1587]: E0818 19:10:34.987184    1587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-373000_kube-system(be3d0c0dc33917295e5fed284f29b0d0)\"" pod="kube-system/kube-apiserver-ha-373000" podUID="be3d0c0dc33917295e5fed284f29b0d0"
	Aug 18 19:10:35 ha-373000 kubelet[1587]: E0818 19:10:35.579478    1587 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-373000"
	Aug 18 19:10:35 ha-373000 kubelet[1587]: E0818 19:10:35.579545    1587 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-373000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Aug 18 19:10:35 ha-373000 kubelet[1587]: I0818 19:10:35.861801    1587 scope.go:117] "RemoveContainer" containerID="7e27aa53db964848f64d9282a63325b77e8066d4f7afd7e078ca49d74f625756"
	Aug 18 19:10:35 ha-373000 kubelet[1587]: E0818 19:10:35.861956    1587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-373000_kube-system(be3d0c0dc33917295e5fed284f29b0d0)\"" pod="kube-system/kube-apiserver-ha-373000" podUID="be3d0c0dc33917295e5fed284f29b0d0"
	Aug 18 19:10:38 ha-373000 kubelet[1587]: W0818 19:10:38.650520    1587 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Aug 18 19:10:38 ha-373000 kubelet[1587]: E0818 19:10:38.650612    1587 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Aug 18 19:10:38 ha-373000 kubelet[1587]: E0818 19:10:38.896386    1587 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-373000\" not found"
	Aug 18 19:10:42 ha-373000 kubelet[1587]: I0818 19:10:42.580270    1587 kubelet_node_status.go:72] "Attempting to register node" node="ha-373000"
	Aug 18 19:10:44 ha-373000 kubelet[1587]: E0818 19:10:44.796749    1587 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-373000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Aug 18 19:10:44 ha-373000 kubelet[1587]: E0818 19:10:44.796750    1587 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-373000"
	Aug 18 19:10:44 ha-373000 kubelet[1587]: E0818 19:10:44.796830    1587 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-373000.17ece84946cc9aa1  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-373000,UID:ha-373000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-373000,},FirstTimestamp:2024-08-18 19:09:18.794128033 +0000 UTC m=+0.098935686,LastTimestamp:2024-08-18 19:09:18.794128033 +0000 UTC m=+0.098935686,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-373000,}"
	Aug 18 19:10:46 ha-373000 kubelet[1587]: I0818 19:10:46.832498    1587 scope.go:117] "RemoveContainer" containerID="24788de6a779b6b3b57b2e2becb6248a4b461c40237836c9a085858f19bf537e"
	Aug 18 19:10:46 ha-373000 kubelet[1587]: E0818 19:10:46.832638    1587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-373000_kube-system(ed589efba4f3e45b48b475449dc23ab8)\"" pod="kube-system/kube-controller-manager-ha-373000" podUID="ed589efba4f3e45b48b475449dc23ab8"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-373000 -n ha-373000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-373000 -n ha-373000: exit status 2 (152.491266ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-373000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (2.68s)
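
Side note on the kube-scheduler excerpt above: "finished without leader elect" is the normal exit path for a scheduler running under client-go leader election once its lease can no longer be renewed, consistent with the connection-refused errors against 192.169.0.5:8443 throughout this log. A minimal sketch of that mechanism, for orientation only (this is not scheduler source; the lock name, namespace, identity, and timings below are illustrative assumptions, the timings matching the usual component defaults):

package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes this runs as a pod, like the scheduler
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lease object the candidates compete for (names are illustrative).
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "demo-scheduler", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("HOSTNAME")},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // how long a granted lease stays valid
		RenewDeadline: 10 * time.Second, // stop leading if renewal takes longer than this
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				<-ctx.Done() // a real component runs its control loops here
			},
			OnStoppedLeading: func() {
				// Renewal failed (for example, the apiserver refuses connections,
				// as in this log), so exit rather than act without a lease.
				os.Exit(1)
			},
		},
	})
}

With the apiserver in CrashLoopBackOff (see the kubelet log above), every renewal attempt fails until RenewDeadline expires, which is the shutdown sequence the scheduler log above shows.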

TestMountStart/serial/StartWithMountFirst (137.05s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-273000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p mount-start-1-273000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : exit status 80 (2m16.968300798s)

-- stdout --
	* [mount-start-1-273000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting minikube without Kubernetes in cluster mount-start-1-273000
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "mount-start-1-273000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for a2:8a:8f:3:7d:10
	* Failed to start hyperkit VM. Running "minikube delete -p mount-start-1-273000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for e:8b:c5:66:32:80
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for e:8b:c5:66:32:80
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-amd64 start -p mount-start-1-273000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-273000 -n mount-start-1-273000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-273000 -n mount-start-1-273000: exit status 7 (78.685346ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0818 12:16:46.019597    4370 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0818 12:16:46.019616    4370 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-273000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMountStart/serial/StartWithMountFirst (137.05s)
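
All of the hyperkit provisioning failures in this run (here, and again in TestScheduledStopUnix and TestPause below) share one signature: the VM was created, but its MAC address never appeared in the host's DHCP lease file, even after the driver deleted the machine and retried with a fresh MAC. On macOS the vmnet DHCP server writes leases to /var/db/dhcpd_leases, and the driver resolves the guest IP by matching the MAC there (vmnet records octets unpadded, hence addresses like e:8b:c5:66:32:80). A rough standalone sketch of that lookup, useful for triage; the helper is hypothetical, not driver code, and assumes the standard lease-block field prefixes:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// MAC taken from the failure above; pass a different one as argv[1] if needed.
	mac := "e:8b:c5:66:32:80"
	if len(os.Args) > 1 {
		mac = os.Args[1]
	}

	f, err := os.Open("/var/db/dhcpd_leases")
	if err != nil {
		fmt.Fprintln(os.Stderr, err) // no lease file at all: the DHCP server never handed out an address
		os.Exit(1)
	}
	defer f.Close()

	// Each lease block lists ip_address= before hw_address=1,<mac>.
	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=1,"):
			if strings.TrimPrefix(line, "hw_address=1,") == mac {
				fmt.Println("lease found:", ip)
				return
			}
		}
	}
	fmt.Println("no lease for", mac) // the condition the driver is timing out on
}

If no block ever appears for the MAC while the VM is running, the problem is on the host side (the DHCP server not answering, or a stale lease file), which fits the retry above failing the same way with a brand-new MAC.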

TestScheduledStopUnix (142.01s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-565000 --memory=2048 --driver=hyperkit 
E0818 12:31:48.111267    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-565000 --memory=2048 --driver=hyperkit : exit status 80 (2m16.627513864s)

-- stdout --
	* [scheduled-stop-565000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-565000" primary control-plane node in "scheduled-stop-565000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-565000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 9a:b4:5d:b8:fa:d9
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-565000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 72:0:56:41:e2:5f
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 72:0:56:41:e2:5f
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-565000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-565000" primary control-plane node in "scheduled-stop-565000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-565000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 9a:b4:5d:b8:fa:d9
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-565000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 72:0:56:41:e2:5f
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 72:0:56:41:e2:5f
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-18 12:31:52.673214 -0700 PDT m=+3268.113537136
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-565000 -n scheduled-stop-565000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-565000 -n scheduled-stop-565000: exit status 7 (79.107503ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0818 12:31:52.750499    5467 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0818 12:31:52.750521    5467 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-565000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "scheduled-stop-565000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-565000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-565000: (5.306438756s)
--- FAIL: TestScheduledStopUnix (142.01s)
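
The post-mortem status checks above pass a Go text/template to --format: the flag is rendered against the profile's status, so {{.Host}} and {{.APIServer}} select single fields from what minikube status would print. A small illustration of the mechanism (the struct is an illustrative stand-in, trimmed to fields these tests query):

package main

import (
	"os"
	"text/template"
)

// Illustrative stand-in for the status fields the tests select on.
type Status struct {
	Host, Kubelet, APIServer string
}

func main() {
	// Equivalent of `minikube status --format={{.Host}}` against a broken profile:
	// prints "Error", matching the stdout captured above.
	t := template.Must(template.New("status").Parse("{{.Host}}"))
	if err := t.Execute(os.Stdout, Status{Host: "Error", Kubelet: "Stopped", APIServer: "Stopped"}); err != nil {
		panic(err)
	}
}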

TestPause/serial/Start (157.79s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-532000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
E0818 13:11:39.121200    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/skaffold-977000/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p pause-532000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : exit status 80 (2m37.707624938s)

-- stdout --
	* [pause-532000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "pause-532000" primary control-plane node in "pause-532000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "pause-532000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7a:54:51:76:72:9f
	* Failed to start hyperkit VM. Running "minikube delete -p pause-532000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 3a:b1:7e:2a:c9:29
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 3a:b1:7e:2a:c9:29
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-amd64 start -p pause-532000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-532000 -n pause-532000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-532000 -n pause-532000: exit status 7 (80.863705ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0818 13:13:58.905493    8209 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0818 13:13:58.905516    8209 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-532000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestPause/serial/Start (157.79s)

TestNetworkPlugins/group/bridge/HairPin (7201.808s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-061000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)
E0818 13:36:45.731990    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/old-k8s-version-000000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:36:48.258024    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:36:58.606496    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/calico-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:37:06.213556    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/old-k8s-version-000000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:37:06.256662    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/enable-default-cni-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:37:17.963335    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/auto-061000/client.crt: no such file or directory" logger="UnhandledError"
panic: test timed out after 2h0m0s
running tests:
	TestStartStop (50m49s)
	TestStartStop/group/default-k8s-diff-port (3m53s)
	TestStartStop/group/default-k8s-diff-port/serial (3m53s)
	TestStartStop/group/default-k8s-diff-port/serial/SecondStart (2m12s)

goroutine 4421 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d
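
The goroutine just above is the source of the panic: testing.(*M).startAlarm arms a timer for the -timeout given to go test, and when it fires the harness panics with "test timed out after ...", prints the still-running tests, and dumps every live goroutine, which is everything that follows. A toy reproduction, assuming a throwaway package run with go test -timeout 5s:

// hang_test.go: `go test -timeout 5s` on this package panics with
// "panic: test timed out after 5s" followed by a goroutine dump like the one here.
package demo

import (
	"testing"
	"time"
)

func TestHang(t *testing.T) {
	time.Sleep(time.Minute) // still sleeping when the harness alarm fires
}

This is also likely why the HairPin header above reads 7201.808s (roughly the 2h budget) even though the subtest itself passed in 0.10s: the suite was killed at the deadline while its group was still open.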

goroutine 1 [chan receive, 2 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000038b60, 0xc0007c1bb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000a10588, {0xed900e0, 0x2a, 0x2a}, {0xa3cb6c5?, 0xc0f09fd?, 0xedb3760?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0007c4820)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0007c4820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 11 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00056ce00)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 158 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a07d00, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 156
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 2764 [chan receive, 2 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc001358000, 0xd7f3240)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2426
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 3751 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xd827690, 0xc0000582a0}, 0xc001550750, 0xc001550798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xd827690, 0xc0000582a0}, 0x50?, 0xc001550750, 0xc001550798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xd827690?, 0xc0000582a0?}, 0xc001353860?, 0xa43f540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0015507d0?, 0xa485844?, 0xc001825a70?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3764
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 161 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 144
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 38 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 37
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

goroutine 4257 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4256
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 143 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000a06fd0, 0x2d)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc00134cd80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xd8416e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a07d00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00097e000, {0xd801320, 0xc0006ba030}, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00097e000, 0x3b9aca00, 0x0, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 158
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 4255 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc00011bd10, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc0007b2580?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xd8416e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00011bd40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001b15410, {0xd801320, 0xc001335470}, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001b15410, 0x3b9aca00, 0x0, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4240
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 2404 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xd827690, 0xc0000582a0}, 0xc00154b750, 0xc00135ff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xd827690, 0xc0000582a0}, 0x20?, 0xc00154b750, 0xc00154b798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xd827690?, 0xc0000582a0?}, 0xc001b6cb60?, 0xa43f540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00154b7d0?, 0xa485844?, 0xc0015eb020?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2417
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 144 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xd827690, 0xc0000582a0}, 0xc0007b2750, 0xc0007dcf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xd827690, 0xc0000582a0}, 0x2?, 0xc0007b2750, 0xc0007b2798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xd827690?, 0xc0000582a0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 158
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 157 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xd81dd40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 156
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3527 [chan receive, 18 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000919940, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3525
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 3764 [chan receive, 14 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001b16340, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3746
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 3642 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001e2a950, 0x13)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc00142dd80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xd8416e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001e2a980)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00097f820, {0xd801320, 0xc001bce690}, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00097f820, 0x3b9aca00, 0x0, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3652
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 3510 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3509
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3508 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000919910, 0x13)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc0007dad80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xd8416e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000919940)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001e499d0, {0xd801320, 0xc0016ef2f0}, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001e499d0, 0x3b9aca00, 0x0, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3527
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 3412 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc001b17490, 0x13)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc001330d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xd8416e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001b174c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000977000, {0xd801320, 0xc00143db00}, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000977000, 0x3b9aca00, 0x0, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3402
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 3413 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xd827690, 0xc0000582a0}, 0xc001a93750, 0xc00132cf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xd827690, 0xc0000582a0}, 0x0?, 0xc001a93750, 0xc001a93798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xd827690?, 0xc0000582a0?}, 0xc001b6dd01?, 0xc0000582a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001a937d0?, 0xa485844?, 0xc0000582a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3402
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 3414 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3413
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 4240 [chan receive, 4 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00011bd40, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4251
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 3402 [chan receive, 19 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001b174c0, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3392
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 4355 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc0017d6600, 0xc001930300)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 4336
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 3856 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc001e2a190, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc001331d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xd8416e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001e2a1c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001374960, {0xd801320, 0xc00135b680}, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001374960, 0x3b9aca00, 0x0, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3857
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 3114 [chan receive, 20 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000919000, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3109
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 3060 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001e2a450, 0x14)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc00142cd80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xd8416e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001e2a480)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0013742e0, {0xd801320, 0xc001334060}, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0013742e0, 0x3b9aca00, 0x0, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3052
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 4035 [chan receive, 12 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001b16780, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4014
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 4239 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xd81dd40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4251
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3643 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xd827690, 0xc0000582a0}, 0xc000b6cf50, 0xc000b6cf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xd827690, 0xc0000582a0}, 0x80?, 0xc000b6cf50, 0xc000b6cf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xd827690?, 0xc0000582a0?}, 0xc001352d00?, 0xa43f540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xa4857e5?, 0xc000206c00?, 0xc0015ead80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3652
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 3113 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xd81dd40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3109
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3644 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3643
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3126 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xd827690, 0xc0000582a0}, 0xc000094f50, 0xc0007d6f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xd827690, 0xc0000582a0}, 0x10?, 0xc000094f50, 0xc000094f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xd827690?, 0xc0000582a0?}, 0xc0013536c0?, 0xa43f540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000094fd0?, 0xa485844?, 0xc0015d7680?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3114
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 4223 [chan receive, 2 minutes]:
testing.(*T).Run(0xc001359ba0, {0xc0a3611?, 0x60400000004?}, 0xc00184c380)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001359ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001359ba0, 0xc001921c80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2767
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2767 [chan receive, 4 minutes]:
testing.(*T).Run(0xc0013584e0, {0xc0979f7?, 0x0?}, 0xc001921c80)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0013584e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0013584e0, 0xc00011b300)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2764
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 3127 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3126
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3291 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc001e2a510, 0x13)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc0012f0d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xd8416e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001e2a540)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001aa8670, {0xd801320, 0xc0019fccc0}, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001aa8670, 0x3b9aca00, 0x0, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3300
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 2426 [chan receive, 51 minutes]:
testing.(*T).Run(0xc001352b60, {0xc09639d?, 0xa43ec13?}, 0xd7f3240)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc001352b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc001352b60, 0xd7f30e0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2384 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xd81dd40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2386
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 2403 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc00011b3d0, 0x1f)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc001361d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xd8416e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00011b400)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001e48000, {0xd801320, 0xc000b7e3c0}, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001e48000, 0x3b9aca00, 0x0, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2417
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 918 [IO wait, 104 minutes]:
internal/poll.runtime_pollWait(0x567d68f8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc000a09a00?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000a09a00)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc000a09a00)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc00069c3a0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc00069c3a0)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0002305a0, {0xd81a160, 0xc00069c3a0})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc0002305a0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc000039ba0?, 0xc000039ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 915
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x129

goroutine 3509 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xd827690, 0xc0000582a0}, 0xc0007adf50, 0xc0007adf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xd827690, 0xc0000582a0}, 0x0?, 0xc0007adf50, 0xc0007adf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xd827690?, 0xc0000582a0?}, 0xc001b6c601?, 0xc0000582a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0xc000207601?, 0xc0000582a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3527
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 1447 [chan send, 98 minutes]:
os/exec.(*Cmd).watchCtx(0xc0001fc600, 0xc001ce02a0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1446
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 3763 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xd81dd40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3746
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3752 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3751
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3293 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3292
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 4104 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xd81dd40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4100
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 1146 [chan receive, 100 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000966f80, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1047
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 1136 [sync.Cond.Wait, 6 minutes]:
sync.runtime_notifyListWait(0xc000966f50, 0x28)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc001432d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xd8416e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000966f80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007bb0c0, {0xd801320, 0xc0013e5800}, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0007bb0c0, 0x3b9aca00, 0x0, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1146
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 2932 [chan receive, 22 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a07dc0, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2930
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 4105 [chan receive, 8 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001e2a7c0, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4100
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 3651 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xd81dd40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3631
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 2949 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2948
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3052 [chan receive, 20 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001e2a480, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3050
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 3401 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xd81dd40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3392
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 1564 [select, 98 minutes]:
net/http.(*persistConn).writeLoop(0xc0013eb560)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1580
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

goroutine 1563 [select, 98 minutes]:
net/http.(*persistConn).readLoop(0xc0013eb560)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1580
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

goroutine 3061 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xd827690, 0xc0000582a0}, 0xc000b6c750, 0xc000b6c798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xd827690, 0xc0000582a0}, 0x6d?, 0xc000b6c750, 0xc000b6c798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xd827690?, 0xc0000582a0?}, 0x3030312d33323439?, 0x6b696e696d2e2f37?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x552f3a6874615079?, 0x6e656a2f73726573?, 0x6e696d2f736e696b?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3052
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 3526 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xd81dd40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3525
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 4022 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xd827690, 0xc0000582a0}, 0xc0007b1750, 0xc0007b1798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xd827690, 0xc0000582a0}, 0x60?, 0xc0007b1750, 0xc0007b1798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xd827690?, 0xc0000582a0?}, 0xa8bc876?, 0xc001790900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xa4857e5?, 0xc00143b200?, 0xc000059b60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4035
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 4128 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4127
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 1154 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1153
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 4127 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xd827690, 0xc0000582a0}, 0xc0007acf50, 0xc0007acf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xd827690, 0xc0000582a0}, 0xc0?, 0xc0007acf50, 0xc0007acf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xd827690?, 0xc0000582a0?}, 0xa8bc876?, 0xc00143b980?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xa4857e5?, 0xc0019a5080?, 0xc0023cbec0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4105
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 3292 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xd827690, 0xc0000582a0}, 0xc001a95750, 0xc0012eef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xd827690, 0xc0000582a0}, 0x0?, 0xc001a95750, 0xc001a95798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xd827690?, 0xc0000582a0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xa945b25?, 0xc001475200?, 0xd81dd40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3300
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 2948 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xd827690, 0xc0000582a0}, 0xc000b70f50, 0xc000b70f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xd827690, 0xc0000582a0}, 0x37?, 0xc000b70f50, 0xc000b70f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xd827690?, 0xc0000582a0?}, 0x6b636f535674696b?, 0x5d5b3a7374726f50?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000b70fd0?, 0xa485844?, 0x3a79727473696765?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2932
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 3062 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3061
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 4353 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x5695f050, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001a36d20?, 0xc0013294c9?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001a36d20, {0xc0013294c9, 0x337, 0x337})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00141c330, {0xc0013294c9?, 0xa4838c7?, 0x20d?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00144c5d0, {0xd7ffce8, 0xc0013c83f0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xd7ffe28, 0xc00144c5d0}, {0xd7ffce8, 0xc0013c83f0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xecc2ba0?, {0xd7ffe28, 0xc00144c5d0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xf?, {0xd7ffe28?, 0xc00144c5d0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0xd7ffe28, 0xc00144c5d0}, {0xd7ffda8, 0xc00141c330}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc00184c380?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 4336
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 1145 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xd81dd40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 1047
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 1153 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xd827690, 0xc0000582a0}, 0xc0007aff50, 0xc00142ff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xd827690, 0xc0000582a0}, 0x58?, 0xc0007aff50, 0xc0007aff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xd827690?, 0xc0000582a0?}, 0xc0013591e0?, 0xa43f540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0007affd0?, 0xa485844?, 0xc000966640?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1146
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 3873 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xd827690, 0xc0000582a0}, 0xc000096750, 0xc000096798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xd827690, 0xc0000582a0}, 0x0?, 0xc000096750, 0xc000096798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xd827690?, 0xc0000582a0?}, 0xc0015eaf00?, 0xc0008c3600?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0000967d0?, 0xa485844?, 0xc001ce0300?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3857
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 3051 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xd81dd40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3050
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3840 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xd81dd40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3852
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 1307 [chan send, 100 minutes]:
os/exec.(*Cmd).watchCtx(0xc0019a5800, 0xc001861b60)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1306
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 4023 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4022
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3750 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc001b16310, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc00132bd80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xd8416e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001b16340)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001aa8380, {0xd801320, 0xc0014ce570}, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001aa8380, 0x3b9aca00, 0x0, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3764
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 2947 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000a07d90, 0x15)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc0012ebd80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xd8416e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a07dc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0023be0a0, {0xd801320, 0xc00143c120}, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0023be0a0, 0x3b9aca00, 0x0, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2932
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 3299 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xd81dd40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3287
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 2931 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xd81dd40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2930
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 1490 [chan send, 98 minutes]:
os/exec.(*Cmd).watchCtx(0xc0013a8780, 0xc001ce15c0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1489
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 3874 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3873
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3125 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000918fd0, 0x14)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc0012efd80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xd8416e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000919000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001371640, {0xd801320, 0xc0013e52f0}, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001371640, 0x3b9aca00, 0x0, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3114
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 1514 [chan send, 98 minutes]:
os/exec.(*Cmd).watchCtx(0xc001ea7800, 0xc001ee73e0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1018
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 2405 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2404
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 4354 [IO wait]:
internal/poll.runtime_pollWait(0x567d6be0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001a36de0?, 0xc001954088?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001a36de0, {0xc001954088, 0x1df78, 0x1df78})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00141c380, {0xc001954088?, 0x56864c88?, 0x1fe2b?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00144c6f0, {0xd7ffce8, 0xc0013c8400})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xd7ffe28, 0xc00144c6f0}, {0xd7ffce8, 0xc0013c8400}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0007afe78?, {0xd7ffe28, 0xc00144c6f0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0007aff38?, {0xd7ffe28?, 0xc00144c6f0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0xd7ffe28, 0xc00144c6f0}, {0xd7ffda8, 0xc00141c380}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001930de0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 4336
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 2417 [chan receive, 64 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00011b400, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2386
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 3857 [chan receive, 14 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001e2a1c0, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3852
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 3300 [chan receive, 19 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001e2a540, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3287
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 4336 [syscall, 2 minutes]:
syscall.syscall6(0xc00144df80?, 0x1000000000010?, 0x10000000019?, 0x565d0e88?, 0x90?, 0xf84c5b8?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc001404b48?, 0xa30c0c5?, 0x90?, 0xd75c9c0?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xa43c885?, 0xc001404b7c, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc0017c04b0)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0017d6600)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc0017d6600)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0018261a0, 0xc0017d6600)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0xd8274d0, 0xc0003c9b90}, 0xc0018261a0, {0xc001881080, 0x1c}, {0x336ed30800b71758?, 0xc000b71760?}, {0xa43ec13?, 0xa396c6f?}, {0xc00158ac00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0018261a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0018261a0, 0xc00184c380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 4223
	/usr/local/go/src/testing/testing.go:1742 +0x390
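
Note: goroutine 4336 above is the test runner itself, parked in syscall wait4 until the minikube child process exits; goroutines 4353/4354 (the stdout/stderr copiers) and 4355 (the context watcher) elsewhere in this dump belong to the same exec.Cmd. A minimal standard-library sketch of that state, with "echo" standing in as a hypothetical placeholder for the minikube binary, looks like:

	package main

	import (
		"bytes"
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		var stdout, stderr bytes.Buffer
		// "echo" is a placeholder; the test actually runs out/minikube-darwin-amd64.
		cmd := exec.CommandContext(ctx, "echo", "hello")
		cmd.Stdout = &stdout // non-*os.File writers spawn copy goroutines like 4353/4354
		cmd.Stderr = &stderr

		// Run = Start + Wait; Wait blocks in wait4 (goroutine 4336's state), while
		// the watchCtx goroutine started by Start (like 4355) waits on ctx.Done().
		if err := cmd.Run(); err != nil {
			fmt.Println("command failed:", err)
		}
		fmt.Print(stdout.String())
	}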

goroutine 3652 [chan receive, 16 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001e2a980, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3631
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 4034 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xd81dd40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4014
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 4021 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001b16750, 0xf)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc00132ad80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xd8416e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001b16780)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001e48070, {0xd801320, 0xc0014cf530}, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001e48070, 0x3b9aca00, 0x0, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4035
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 4126 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc001e2a790, 0x1)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc00142bd80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xd8416e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001e2a7c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006c44e0, {0xd801320, 0xc0023c4210}, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006c44e0, 0x3b9aca00, 0x0, 0x1, 0xc0000582a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4105
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 4256 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xd827690, 0xc0000582a0}, 0xc00154c750, 0xc00154c798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xd827690, 0xc0000582a0}, 0x0?, 0xc00154c750, 0xc00154c798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xd827690?, 0xc0000582a0?}, 0xa8bc801?, 0xc0000582a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0xa485801?, 0xc0000582a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4240
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a
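
Note: most of the remaining leaked goroutines share one of two stacks from client-go's certificate-rotation controller: workers parked in sync.Cond.Wait inside workqueue.(*Typed[...]).Get, and Run methods parked on a chan receive of their stop channel (0xc0000582a0 in every Run frame, so it was still open when the dump was taken). As a minimal sketch of that pattern, assuming client-go v0.31's typed workqueue and apimachinery's wait package rather than minikube's own code:

	package main

	import (
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/util/workqueue"
	)

	func main() {
		queue := workqueue.NewTyped[string]() // the Typed queue seen in the (*Typed[...]).Get frames
		stopCh := make(chan struct{})

		// Worker: Get blocks in sync.Cond.Wait while the queue is empty, which
		// is the parked state of the "sync.Cond.Wait" goroutines above.
		runWorker := func() {
			for {
				item, shutdown := queue.Get()
				if shutdown {
					return
				}
				fmt.Println("processing", item)
				queue.Done(item)
			}
		}

		// wait.Until restarts the worker until stopCh closes; in the dump this
		// shows up as the BackoffUntil/JitterUntil/Until chain under each worker.
		go wait.Until(runWorker, time.Second, stopCh)

		queue.Add("rotate-client-cert") // hypothetical work item
		time.Sleep(100 * time.Millisecond)

		// Run-style goroutines park on "<-stopCh" (the "chan receive" frames);
		// closing stopCh and shutting the queue down releases both.
		close(stopCh)
		queue.ShutDown()
	}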


Test pass (239/276)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 28.32
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.3
9 TestDownloadOnly/v1.20.0/DeleteAll 0.24
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.21
12 TestDownloadOnly/v1.31.0/json-events 9.92
13 TestDownloadOnly/v1.31.0/preload-exists 0
16 TestDownloadOnly/v1.31.0/kubectl 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.29
18 TestDownloadOnly/v1.31.0/DeleteAll 0.24
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.21
21 TestBinaryMirror 0.95
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.19
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.21
27 TestAddons/Setup 222.2
31 TestAddons/serial/GCPAuth/Namespaces 0.11
33 TestAddons/parallel/Registry 13.75
34 TestAddons/parallel/Ingress 19.17
35 TestAddons/parallel/InspektorGadget 10.64
36 TestAddons/parallel/MetricsServer 5.6
37 TestAddons/parallel/HelmTiller 10.2
39 TestAddons/parallel/CSI 54.31
40 TestAddons/parallel/Headlamp 19.39
41 TestAddons/parallel/CloudSpanner 5.42
42 TestAddons/parallel/LocalPath 52.49
43 TestAddons/parallel/NvidiaDevicePlugin 5.44
44 TestAddons/parallel/Yakd 11.55
45 TestAddons/StoppedEnableDisable 5.93
53 TestHyperKitDriverInstallOrUpdate 8.46
57 TestErrorSpam/start 1.33
58 TestErrorSpam/status 0.45
59 TestErrorSpam/pause 5.26
60 TestErrorSpam/unpause 173.74
61 TestErrorSpam/stop 155.84
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 50.31
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 62.08
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.05
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.1
73 TestFunctional/serial/CacheCmd/cache/add_local 1.33
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
75 TestFunctional/serial/CacheCmd/cache/list 0.08
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.17
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.04
78 TestFunctional/serial/CacheCmd/cache/delete 0.16
79 TestFunctional/serial/MinikubeKubectlCmd 1.2
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.57
81 TestFunctional/serial/ExtraConfig 63.6
82 TestFunctional/serial/ComponentHealth 0.05
83 TestFunctional/serial/LogsCmd 2.61
84 TestFunctional/serial/LogsFileCmd 2.73
85 TestFunctional/serial/InvalidService 5.2
87 TestFunctional/parallel/ConfigCmd 0.5
88 TestFunctional/parallel/DashboardCmd 11.11
89 TestFunctional/parallel/DryRun 1.39
90 TestFunctional/parallel/InternationalLanguage 1.07
91 TestFunctional/parallel/StatusCmd 0.53
95 TestFunctional/parallel/ServiceCmdConnect 8.7
96 TestFunctional/parallel/AddonsCmd 0.22
97 TestFunctional/parallel/PersistentVolumeClaim 27.16
99 TestFunctional/parallel/SSHCmd 0.34
100 TestFunctional/parallel/CpCmd 1.04
101 TestFunctional/parallel/MySQL 26.55
102 TestFunctional/parallel/FileSync 0.2
103 TestFunctional/parallel/CertSync 1.04
107 TestFunctional/parallel/NodeLabels 0.05
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.17
111 TestFunctional/parallel/License 0.62
112 TestFunctional/parallel/Version/short 0.1
113 TestFunctional/parallel/Version/components 0.56
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.15
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.15
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.15
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.15
118 TestFunctional/parallel/ImageCommands/ImageBuild 2.19
119 TestFunctional/parallel/ImageCommands/Setup 1.83
120 TestFunctional/parallel/DockerEnv/bash 0.61
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.09
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.64
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.43
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.4
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.46
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.74
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.42
131 TestFunctional/parallel/ServiceCmd/DeployApp 20.27
132 TestFunctional/parallel/ServiceCmd/List 0.18
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.22
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.26
135 TestFunctional/parallel/ServiceCmd/Format 0.28
136 TestFunctional/parallel/ServiceCmd/URL 0.24
138 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.38
139 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.14
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.04
145 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.03
146 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.14
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.25
149 TestFunctional/parallel/ProfileCmd/profile_list 0.31
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.26
151 TestFunctional/parallel/MountCmd/any-port 4.86
152 TestFunctional/parallel/MountCmd/specific-port 1.92
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.75
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 201.11
161 TestMultiControlPlane/serial/DeployApp 4.9
162 TestMultiControlPlane/serial/PingHostFromPods 1.29
163 TestMultiControlPlane/serial/AddWorkerNode 49.87
164 TestMultiControlPlane/serial/NodeLabels 0.05
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.34
166 TestMultiControlPlane/serial/CopyFile 9.34
167 TestMultiControlPlane/serial/StopSecondaryNode 8.7
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.27
169 TestMultiControlPlane/serial/RestartSecondaryNode 38.26
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.34
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.26
174 TestMultiControlPlane/serial/StopCluster 24.98
181 TestImageBuild/serial/Setup 37.54
182 TestImageBuild/serial/NormalBuild 1.66
183 TestImageBuild/serial/BuildWithBuildArg 0.83
184 TestImageBuild/serial/BuildWithDockerIgnore 0.62
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.63
189 TestJSONOutput/start/Command 47.86
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.47
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.44
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 8.32
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.57
217 TestMainNoArgs 0.08
218 TestMinikubeProfile 90.36
224 TestMultiNode/serial/FreshStart2Nodes 106.94
225 TestMultiNode/serial/DeployApp2Nodes 4.22
226 TestMultiNode/serial/PingHostFrom2Pods 0.89
227 TestMultiNode/serial/AddNode 45.63
228 TestMultiNode/serial/MultiNodeLabels 0.05
229 TestMultiNode/serial/ProfileList 0.18
230 TestMultiNode/serial/CopyFile 5.4
231 TestMultiNode/serial/StopNode 2.84
232 TestMultiNode/serial/StartAfterStop 36.67
233 TestMultiNode/serial/RestartKeepsNodes 204.38
234 TestMultiNode/serial/DeleteNode 3.26
235 TestMultiNode/serial/StopMultiNode 16.8
236 TestMultiNode/serial/RestartMultiNode 130.31
237 TestMultiNode/serial/ValidateNameConflict 44.25
241 TestPreload 152.1
244 TestSkaffold 114.11
247 TestRunningBinaryUpgrade 88.84
249 TestKubernetesUpgrade 124.04
262 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.08
263 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 6.91
271 TestStoppedBinaryUpgrade/Setup 2.12
272 TestStoppedBinaryUpgrade/Upgrade 1301.66
273 TestStoppedBinaryUpgrade/MinikubeLogs 3.15
275 TestNoKubernetes/serial/StartNoK8sWithVersion 0.62
276 TestNoKubernetes/serial/StartWithK8s 39.64
277 TestNoKubernetes/serial/StartWithStopK8s 8.74
278 TestNoKubernetes/serial/Start 19.97
281 TestNoKubernetes/serial/VerifyK8sNotRunning 0.13
282 TestNoKubernetes/serial/ProfileList 0.38
283 TestNoKubernetes/serial/Stop 2.37
284 TestNoKubernetes/serial/StartNoArgs 75.78
285 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.13
286 TestNetworkPlugins/group/auto/Start 255.98
287 TestNetworkPlugins/group/kindnet/Start 72.1
288 TestNetworkPlugins/group/kindnet/ControllerPod 6
289 TestNetworkPlugins/group/kindnet/KubeletFlags 0.16
290 TestNetworkPlugins/group/kindnet/NetCatPod 12.15
291 TestNetworkPlugins/group/kindnet/DNS 0.14
292 TestNetworkPlugins/group/kindnet/Localhost 0.1
293 TestNetworkPlugins/group/kindnet/HairPin 0.1
294 TestNetworkPlugins/group/calico/Start 66.33
295 TestNetworkPlugins/group/calico/ControllerPod 6
296 TestNetworkPlugins/group/calico/KubeletFlags 0.16
297 TestNetworkPlugins/group/calico/NetCatPod 12.13
298 TestNetworkPlugins/group/calico/DNS 0.19
299 TestNetworkPlugins/group/calico/Localhost 0.1
300 TestNetworkPlugins/group/calico/HairPin 0.1
301 TestNetworkPlugins/group/auto/KubeletFlags 0.16
302 TestNetworkPlugins/group/auto/NetCatPod 10.14
303 TestNetworkPlugins/group/auto/DNS 0.13
304 TestNetworkPlugins/group/auto/Localhost 0.1
305 TestNetworkPlugins/group/auto/HairPin 0.13
306 TestNetworkPlugins/group/custom-flannel/Start 53.62
307 TestNetworkPlugins/group/false/Start 83.59
308 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.17
309 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.13
310 TestNetworkPlugins/group/custom-flannel/DNS 0.12
311 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
312 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
313 TestNetworkPlugins/group/enable-default-cni/Start 188.39
314 TestNetworkPlugins/group/false/KubeletFlags 0.15
315 TestNetworkPlugins/group/false/NetCatPod 12.15
316 TestNetworkPlugins/group/false/DNS 0.13
317 TestNetworkPlugins/group/false/Localhost 0.1
318 TestNetworkPlugins/group/false/HairPin 0.1
319 TestNetworkPlugins/group/flannel/Start 51.73
320 TestNetworkPlugins/group/flannel/ControllerPod 6.01
321 TestNetworkPlugins/group/flannel/KubeletFlags 0.16
322 TestNetworkPlugins/group/flannel/NetCatPod 11.13
323 TestNetworkPlugins/group/flannel/DNS 0.13
324 TestNetworkPlugins/group/flannel/Localhost 0.1
325 TestNetworkPlugins/group/flannel/HairPin 0.1
326 TestNetworkPlugins/group/bridge/Start 166.15
327 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.16
328 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.14
329 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
330 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
331 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
332 TestNetworkPlugins/group/kubenet/Start 51.64
333 TestNetworkPlugins/group/kubenet/KubeletFlags 0.15
334 TestNetworkPlugins/group/kubenet/NetCatPod 11.13
335 TestNetworkPlugins/group/kubenet/DNS 0.13
336 TestNetworkPlugins/group/kubenet/Localhost 0.12
337 TestNetworkPlugins/group/kubenet/HairPin 0.1
338 TestNetworkPlugins/group/bridge/KubeletFlags 0.15
339 TestNetworkPlugins/group/bridge/NetCatPod 12.13
342 TestNetworkPlugins/group/bridge/DNS 20.84
343 TestNetworkPlugins/group/bridge/Localhost 0.11
TestDownloadOnly/v1.20.0/json-events (28.32s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-948000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-948000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit : (28.316088055s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (28.32s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-948000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-948000: exit status 85 (297.499448ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-948000 | jenkins | v1.33.1 | 18 Aug 24 11:37 PDT |          |
	|         | -p download-only-948000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 11:37:24
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 11:37:24.674453    1529 out.go:345] Setting OutFile to fd 1 ...
	I0818 11:37:24.674746    1529 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 11:37:24.674751    1529 out.go:358] Setting ErrFile to fd 2...
	I0818 11:37:24.674755    1529 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 11:37:24.674926    1529 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1007/.minikube/bin
	W0818 11:37:24.675039    1529 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19423-1007/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19423-1007/.minikube/config/config.json: no such file or directory
	I0818 11:37:24.676815    1529 out.go:352] Setting JSON to true
	I0818 11:37:24.701978    1529 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":415,"bootTime":1724005829,"procs":428,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0818 11:37:24.702080    1529 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 11:37:24.725883    1529 out.go:97] [download-only-948000] minikube v1.33.1 on Darwin 14.6.1
	I0818 11:37:24.726110    1529 notify.go:220] Checking for updates...
	W0818 11:37:24.726150    1529 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball: no such file or directory
	I0818 11:37:24.747345    1529 out.go:169] MINIKUBE_LOCATION=19423
	I0818 11:37:24.768730    1529 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 11:37:24.790854    1529 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0818 11:37:24.812549    1529 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 11:37:24.833733    1529 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	W0818 11:37:24.883548    1529 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0818 11:37:24.884008    1529 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 11:37:24.935254    1529 out.go:97] Using the hyperkit driver based on user configuration
	I0818 11:37:24.935317    1529 start.go:297] selected driver: hyperkit
	I0818 11:37:24.935334    1529 start.go:901] validating driver "hyperkit" against <nil>
	I0818 11:37:24.935562    1529 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 11:37:24.935959    1529 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19423-1007/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0818 11:37:25.333648    1529 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0818 11:37:25.338712    1529 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:37:25.338734    1529 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0818 11:37:25.338765    1529 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 11:37:25.343556    1529 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0818 11:37:25.344023    1529 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0818 11:37:25.344079    1529 cni.go:84] Creating CNI manager for ""
	I0818 11:37:25.344100    1529 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0818 11:37:25.344178    1529 start.go:340] cluster config:
	{Name:download-only-948000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-948000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 11:37:25.344426    1529 iso.go:125] acquiring lock: {Name:mkfcc70059a4397f76252ba67d02448d3569c468 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 11:37:25.365335    1529 out.go:97] Downloading VM boot image ...
	I0818 11:37:25.365437    1529 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0818 11:37:35.489847    1529 out.go:97] Starting "download-only-948000" primary control-plane node in "download-only-948000" cluster
	I0818 11:37:35.489883    1529 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0818 11:37:35.542710    1529 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0818 11:37:35.542734    1529 cache.go:56] Caching tarball of preloaded images
	I0818 11:37:35.542934    1529 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0818 11:37:35.562565    1529 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0818 11:37:35.562581    1529 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0818 11:37:35.643598    1529 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0818 11:37:48.991576    1529 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0818 11:37:48.991771    1529 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0818 11:37:49.540838    1529 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0818 11:37:49.541078    1529 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/download-only-948000/config.json ...
	I0818 11:37:49.541101    1529 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/download-only-948000/config.json: {Name:mkc06deda4c9fcb8e53f47a5e7204dcd8da67d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 11:37:49.542467    1529 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0818 11:37:49.542779    1529 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/darwin/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-948000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-948000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.30s)
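
Although the command above exits non-zero, the subtest still passes: a --download-only profile never creates a host, so "minikube logs" reporting exit status 85 is the expected outcome here. A hedged sketch of asserting that behavior from Go (hypothetical standalone check, not the actual test helper; the binary path and profile name are copied from the log):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Run `minikube logs` against a profile whose host was never created.
		cmd := exec.Command("out/minikube-darwin-amd64", "logs", "-p", "download-only-948000")
		err := cmd.Run()

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 85 {
			fmt.Println("expected exit status 85: no control-plane host exists for this profile")
			return
		}
		fmt.Println("unexpected result:", err)
	}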

TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-948000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.31.0/json-events (9.92s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-325000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-325000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=hyperkit : (9.91589042s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (9.92s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-325000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-325000: exit status 85 (291.445177ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-948000 | jenkins | v1.33.1 | 18 Aug 24 11:37 PDT |                     |
	|         | -p download-only-948000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 18 Aug 24 11:37 PDT | 18 Aug 24 11:37 PDT |
	| delete  | -p download-only-948000        | download-only-948000 | jenkins | v1.33.1 | 18 Aug 24 11:37 PDT | 18 Aug 24 11:37 PDT |
	| start   | -o=json --download-only        | download-only-325000 | jenkins | v1.33.1 | 18 Aug 24 11:37 PDT |                     |
	|         | -p download-only-325000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 11:37:53
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 11:37:53.737010    1560 out.go:345] Setting OutFile to fd 1 ...
	I0818 11:37:53.737200    1560 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 11:37:53.737206    1560 out.go:358] Setting ErrFile to fd 2...
	I0818 11:37:53.737210    1560 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 11:37:53.737379    1560 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1007/.minikube/bin
	I0818 11:37:53.738840    1560 out.go:352] Setting JSON to true
	I0818 11:37:53.762835    1560 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":444,"bootTime":1724005829,"procs":425,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0818 11:37:53.762937    1560 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 11:37:53.784166    1560 out.go:97] [download-only-325000] minikube v1.33.1 on Darwin 14.6.1
	I0818 11:37:53.784324    1560 notify.go:220] Checking for updates...
	I0818 11:37:53.806118    1560 out.go:169] MINIKUBE_LOCATION=19423
	I0818 11:37:53.827117    1560 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 11:37:53.848074    1560 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0818 11:37:53.869146    1560 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 11:37:53.890310    1560 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	W0818 11:37:53.932121    1560 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0818 11:37:53.932555    1560 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 11:37:53.963242    1560 out.go:97] Using the hyperkit driver based on user configuration
	I0818 11:37:53.963286    1560 start.go:297] selected driver: hyperkit
	I0818 11:37:53.963298    1560 start.go:901] validating driver "hyperkit" against <nil>
	I0818 11:37:53.963465    1560 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 11:37:53.963647    1560 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19423-1007/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0818 11:37:53.973389    1560 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0818 11:37:53.977980    1560 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:37:53.978002    1560 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0818 11:37:53.978031    1560 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 11:37:53.981084    1560 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0818 11:37:53.981226    1560 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0818 11:37:53.981255    1560 cni.go:84] Creating CNI manager for ""
	I0818 11:37:53.981269    1560 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 11:37:53.981276    1560 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0818 11:37:53.981350    1560 start.go:340] cluster config:
	{Name:download-only-325000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-325000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 11:37:53.981448    1560 iso.go:125] acquiring lock: {Name:mkfcc70059a4397f76252ba67d02448d3569c468 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 11:37:54.001994    1560 out.go:97] Starting "download-only-325000" primary control-plane node in "download-only-325000" cluster
	I0818 11:37:54.002049    1560 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 11:37:54.061969    1560 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0818 11:37:54.061988    1560 cache.go:56] Caching tarball of preloaded images
	I0818 11:37:54.062282    1560 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 11:37:54.083194    1560 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0818 11:37:54.083219    1560 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 ...
	I0818 11:37:54.170088    1560 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4?checksum=md5:2dd98f97b896d7a4f012ee403b477cc8 -> /Users/jenkins/minikube-integration/19423-1007/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-325000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-325000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.29s)

TestDownloadOnly/v1.31.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.24s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-325000
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.21s)

TestBinaryMirror (0.95s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-146000 --alsologtostderr --binary-mirror http://127.0.0.1:49539 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-146000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-146000
--- PASS: TestBinaryMirror (0.95s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-103000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-103000: exit status 85 (187.037504ms)

-- stdout --
	* Profile "addons-103000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-103000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-103000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-103000: exit status 85 (207.801748ms)

-- stdout --
	* Profile "addons-103000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-103000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

TestAddons/Setup (222.2s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-103000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-darwin-amd64 start -p addons-103000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m42.204270744s)
--- PASS: TestAddons/Setup (222.20s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-103000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-103000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/parallel/Registry (13.75s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.842684ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-mg7mv" [9759a33e-5cfb-46e6-af39-b7afdc6ae4ed] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003993987s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-8jjw4" [b82f8fbe-a074-466b-9e7f-b0a4ce9248ae] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003799091s
addons_test.go:342: (dbg) Run:  kubectl --context addons-103000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-103000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-103000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.132461626s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p addons-103000 ip
2024/08/18 11:45:39 [DEBUG] GET http://192.169.0.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 -p addons-103000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.75s)

TestAddons/parallel/Ingress (19.17s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-103000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-103000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-103000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [3cfcc982-627a-46aa-b1fd-e9b83b4f2590] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [3cfcc982-627a-46aa-b1fd-e9b83b4f2590] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003807272s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 -p addons-103000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-103000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 -p addons-103000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.169.0.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p addons-103000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 -p addons-103000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-amd64 -p addons-103000 addons disable ingress --alsologtostderr -v=1: (7.463968013s)
--- PASS: TestAddons/parallel/Ingress (19.17s)

TestAddons/parallel/InspektorGadget (10.64s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-bc29t" [9bc5548d-277f-47c6-9926-18782bf727cf] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.006207816s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-103000
addons_test.go:851: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-103000: (5.632252753s)
--- PASS: TestAddons/parallel/InspektorGadget (10.64s)

TestAddons/parallel/MetricsServer (5.6s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.676378ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-w86gd" [c0e788e3-9dc8-424b-bfef-4756401a662e] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003475854s
addons_test.go:417: (dbg) Run:  kubectl --context addons-103000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-amd64 -p addons-103000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.60s)

TestAddons/parallel/HelmTiller (10.2s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.85697ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-swqh8" [207e865c-151f-454b-8fca-434a9300d93a] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004837268s
addons_test.go:475: (dbg) Run:  kubectl --context addons-103000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-103000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.763192543s)
addons_test.go:492: (dbg) Run:  out/minikube-darwin-amd64 -p addons-103000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.20s)

TestAddons/parallel/CSI (54.31s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 4.011134ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-103000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-103000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [fba448bc-4916-48c3-b01f-fadf9d7aa568] Pending
helpers_test.go:344: "task-pv-pod" [fba448bc-4916-48c3-b01f-fadf9d7aa568] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [fba448bc-4916-48c3-b01f-fadf9d7aa568] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.005846819s
addons_test.go:590: (dbg) Run:  kubectl --context addons-103000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-103000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-103000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-103000 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-103000 delete pod task-pv-pod: (1.120677003s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-103000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-103000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-103000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [2fd18680-b84c-4450-a0f0-4d5793954ce1] Pending
helpers_test.go:344: "task-pv-pod-restore" [2fd18680-b84c-4450-a0f0-4d5793954ce1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [2fd18680-b84c-4450-a0f0-4d5793954ce1] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004198125s
addons_test.go:632: (dbg) Run:  kubectl --context addons-103000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-103000 delete pod task-pv-pod-restore: (1.08842511s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-103000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-103000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-amd64 -p addons-103000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-amd64 -p addons-103000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.405340135s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-amd64 -p addons-103000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (54.31s)
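
The repeated helpers_test.go:394 lines in this test are a poll loop: the helper re-runs "kubectl get pvc ... -o jsonpath={.status.phase}" until the claim reports Bound or the stated 6m0s deadline passes. A rough sketch of that loop (assumptions: kubectl on PATH, the addons-103000 context, and a 2-second retry interval; this is not the actual helper code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		deadline := time.Now().Add(6 * time.Minute) // mirrors "waiting 6m0s for pvc"
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", "addons-103000",
				"get", "pvc", "hpvc", "-n", "default",
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				fmt.Println("pvc hpvc is Bound")
				return
			}
			time.Sleep(2 * time.Second) // each retry appears as one Run line in the log
		}
		fmt.Println("timed out waiting for pvc hpvc to become Bound")
	}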

TestAddons/parallel/Headlamp (19.39s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-103000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-q7phx" [02cf979b-6a4b-4113-8eb6-1085dad7c00e] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-q7phx" [02cf979b-6a4b-4113-8eb6-1085dad7c00e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-q7phx" [02cf979b-6a4b-4113-8eb6-1085dad7c00e] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.005579519s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-amd64 -p addons-103000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-amd64 -p addons-103000 addons disable headlamp --alsologtostderr -v=1: (5.467153209s)
--- PASS: TestAddons/parallel/Headlamp (19.39s)

TestAddons/parallel/CloudSpanner (5.42s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-tntd2" [7aa109f5-83ac-4d3b-b458-7537f8c013ae] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003535077s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-103000
--- PASS: TestAddons/parallel/CloudSpanner (5.42s)

TestAddons/parallel/LocalPath (52.49s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-103000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-103000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-103000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [1753fb94-445d-4d1e-ab39-930e178b392f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [1753fb94-445d-4d1e-ab39-930e178b392f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [1753fb94-445d-4d1e-ab39-930e178b392f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004498009s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-103000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-amd64 -p addons-103000 ssh "cat /opt/local-path-provisioner/pvc-046a8640-c56e-4cb0-aa9a-15fc128421e0_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-103000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-103000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-amd64 -p addons-103000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-amd64 -p addons-103000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.847336345s)
--- PASS: TestAddons/parallel/LocalPath (52.49s)

TestAddons/parallel/NvidiaDevicePlugin (5.44s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-gkn4q" [b7ad2759-6346-4725-ba80-628779b23d51] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.002635204s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-103000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.44s)

TestAddons/parallel/Yakd (11.55s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-24wj5" [8672ae7d-b9ac-46b2-8928-3dbcc93545b5] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002109263s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-amd64 -p addons-103000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-amd64 -p addons-103000 addons disable yakd --alsologtostderr -v=1: (5.550775017s)
--- PASS: TestAddons/parallel/Yakd (11.55s)

TestAddons/StoppedEnableDisable (5.93s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-103000
addons_test.go:174: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-103000: (5.39393222s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-103000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-103000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-103000
--- PASS: TestAddons/StoppedEnableDisable (5.93s)

TestHyperKitDriverInstallOrUpdate (8.46s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.46s)

TestErrorSpam/start (1.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 start --dry-run
--- PASS: TestErrorSpam/start (1.33s)

TestErrorSpam/status (0.45s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 status: exit status 6 (151.128075ms)

-- stdout --
	nospam-719000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0818 11:48:43.400741    2062 status.go:417] kubeconfig endpoint: get endpoint: "nospam-719000" does not appear in /Users/jenkins/minikube-integration/19423-1007/kubeconfig

** /stderr **
error_spam_test.go:161: "out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 status" failed: exit status 6
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 status: exit status 6 (150.225519ms)

-- stdout --
	nospam-719000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0818 11:48:43.550434    2067 status.go:417] kubeconfig endpoint: get endpoint: "nospam-719000" does not appear in /Users/jenkins/minikube-integration/19423-1007/kubeconfig

** /stderr **
error_spam_test.go:161: "out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 status" failed: exit status 6
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 status: exit status 6 (149.204626ms)

-- stdout --
	nospam-719000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0818 11:48:43.700139    2072 status.go:417] kubeconfig endpoint: get endpoint: "nospam-719000" does not appear in /Users/jenkins/minikube-integration/19423-1007/kubeconfig

** /stderr **
error_spam_test.go:184: "out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 status" failed: exit status 6
--- PASS: TestErrorSpam/status (0.45s)

TestErrorSpam/pause (5.26s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 pause: exit status 80 (1.244425319s)

-- stdout --
	* Pausing node nospam-719000 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:161: "out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 pause" failed: exit status 80
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 pause: exit status 80 (2.250599378s)

-- stdout --
	* Pausing node nospam-719000 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:161: "out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 pause" failed: exit status 80
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 pause: exit status 80 (1.765511097s)

-- stdout --
	* Pausing node nospam-719000 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:184: "out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.26s)

TestErrorSpam/unpause (173.74s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 unpause: exit status 80 (53.262024875s)

-- stdout --
	* Unpausing node nospam-719000 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: docker: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format=<no value>: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:161: "out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 unpause" failed: exit status 80
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 unpause: exit status 80 (1m0.237014912s)

-- stdout --
	* Unpausing node nospam-719000 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: docker: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format=<no value>: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:161: "out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 unpause" failed: exit status 80
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 unpause: exit status 80 (1m0.235768394s)

-- stdout --
	* Unpausing node nospam-719000 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: docker: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format=<no value>: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:184: "out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (173.74s)

TestErrorSpam/stop (155.84s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 stop: (5.404052903s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 stop
E0818 11:51:48.161265    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:51:48.169015    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:51:48.180379    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:51:48.202440    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:51:48.245650    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:51:48.327097    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:51:48.490615    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:51:48.814244    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:51:49.457173    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:51:50.739696    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:51:53.302019    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:51:58.423996    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:52:08.667115    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:52:29.149516    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 stop: (1m15.206387748s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 stop
E0818 11:53:10.112898    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-amd64 -p nospam-719000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-719000 stop: (1m15.224521068s)
--- PASS: TestErrorSpam/stop (155.84s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19423-1007/.minikube/files/etc/test/nested/copy/1526/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (50.31s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-843000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
E0818 11:54:32.035999    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-darwin-amd64 start -p functional-843000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (50.306062616s)
--- PASS: TestFunctional/serial/StartWithProxy (50.31s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (62.08s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-843000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-amd64 start -p functional-843000 --alsologtostderr -v=8: (1m2.078153466s)
functional_test.go:663: soft start took 1m2.078581835s for "functional-843000" cluster.
--- PASS: TestFunctional/serial/SoftStart (62.08s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-843000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-843000 cache add registry.k8s.io/pause:3.1: (1.165237649s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-843000 cache add registry.k8s.io/pause:3.3: (1.031002214s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.10s)

TestFunctional/serial/CacheCmd/cache/add_local (1.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-843000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialCacheCmdcacheadd_local825339645/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 cache add minikube-local-cache-test:functional-843000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 cache delete minikube-local-cache-test:functional-843000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-843000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.33s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-843000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (143.826484ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (1.2s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 kubectl -- --context functional-843000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-amd64 -p functional-843000 kubectl -- --context functional-843000 get pods: (1.20065487s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.20s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.57s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-843000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-843000 get pods: (1.573736773s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.57s)

TestFunctional/serial/ExtraConfig (63.6s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-843000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0818 11:56:48.160013    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
E0818 11:57:15.877768    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-amd64 start -p functional-843000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m3.59656247s)
functional_test.go:761: restart took 1m3.596714487s for "functional-843000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (63.60s)

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-843000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (2.61s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 logs
functional_test.go:1236: (dbg) Done: out/minikube-darwin-amd64 -p functional-843000 logs: (2.611414203s)
--- PASS: TestFunctional/serial/LogsCmd (2.61s)

TestFunctional/serial/LogsFileCmd (2.73s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd1336080823/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-darwin-amd64 -p functional-843000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd1336080823/001/logs.txt: (2.724022553s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.73s)

TestFunctional/serial/InvalidService (5.2s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-843000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-843000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-843000: exit status 115 (265.801361ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.4:31313 |
	|-----------|-------------|-------------|--------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-843000 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-843000 delete -f testdata/invalidsvc.yaml: (1.806426154s)
--- PASS: TestFunctional/serial/InvalidService (5.20s)

TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-843000 config get cpus: exit status 14 (66.894744ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-843000 config get cpus: exit status 14 (56.258192ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)

TestFunctional/parallel/DashboardCmd (11.11s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-843000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-843000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2910: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.11s)

TestFunctional/parallel/DryRun (1.39s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-843000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-843000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (673.684006ms)

-- stdout --
	* [functional-843000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0818 11:58:31.504437    2823 out.go:345] Setting OutFile to fd 1 ...
	I0818 11:58:31.504754    2823 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 11:58:31.504760    2823 out.go:358] Setting ErrFile to fd 2...
	I0818 11:58:31.504764    2823 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 11:58:31.504973    2823 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1007/.minikube/bin
	I0818 11:58:31.545031    2823 out.go:352] Setting JSON to false
	I0818 11:58:31.569318    2823 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1682,"bootTime":1724005829,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0818 11:58:31.569405    2823 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 11:58:31.628014    2823 out.go:177] * [functional-843000] minikube v1.33.1 on Darwin 14.6.1
	I0818 11:58:31.649274    2823 notify.go:220] Checking for updates...
	I0818 11:58:31.669985    2823 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 11:58:31.712146    2823 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 11:58:31.754191    2823 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0818 11:58:31.796063    2823 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 11:58:31.838328    2823 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	I0818 11:58:31.880239    2823 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 11:58:31.922235    2823 config.go:182] Loaded profile config "functional-843000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 11:58:31.922571    2823 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:58:31.922617    2823 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:58:31.931933    2823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50738
	I0818 11:58:31.932329    2823 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:58:31.932768    2823 main.go:141] libmachine: Using API Version  1
	I0818 11:58:31.932779    2823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:58:31.932997    2823 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:58:31.933117    2823 main.go:141] libmachine: (functional-843000) Calling .DriverName
	I0818 11:58:31.933300    2823 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 11:58:31.933554    2823 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:58:31.933580    2823 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:58:31.942689    2823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50740
	I0818 11:58:31.943069    2823 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:58:31.943397    2823 main.go:141] libmachine: Using API Version  1
	I0818 11:58:31.943421    2823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:58:31.943685    2823 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:58:31.943796    2823 main.go:141] libmachine: (functional-843000) Calling .DriverName
	I0818 11:58:31.973036    2823 out.go:177] * Using the hyperkit driver based on existing profile
	I0818 11:58:31.994020    2823 start.go:297] selected driver: hyperkit
	I0818 11:58:31.994045    2823 start.go:901] validating driver "hyperkit" against &{Name:functional-843000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-843000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 11:58:31.994253    2823 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 11:58:32.018929    2823 out.go:201] 
	W0818 11:58:32.039958    2823 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0818 11:58:32.060919    2823 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-843000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (1.39s)
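
The dry-run pair above exercises minikube's client-side memory validation: a request below the usable minimum fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23) before any VM work starts. A minimal sketch of the same check by hand, using only flags that appear in this run:

  # fails validation: 250MB is below the 1800MB usable minimum (exit status 23)
  out/minikube-darwin-amd64 start -p functional-843000 --dry-run --memory 250MB --driver=hyperkit
  # passes validation against the existing profile without creating a VM
  out/minikube-darwin-amd64 start -p functional-843000 --dry-run --driver=hyperkit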

TestFunctional/parallel/InternationalLanguage (1.07s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-843000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-843000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (1.064819427s)

-- stdout --
	* [functional-843000] minikube v1.33.1 sur Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0818 11:58:32.865764    2856 out.go:345] Setting OutFile to fd 1 ...
	I0818 11:58:32.866019    2856 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 11:58:32.866024    2856 out.go:358] Setting ErrFile to fd 2...
	I0818 11:58:32.866028    2856 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 11:58:32.866210    2856 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1007/.minikube/bin
	I0818 11:58:32.867778    2856 out.go:352] Setting JSON to false
	I0818 11:58:32.892326    2856 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1683,"bootTime":1724005829,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0818 11:58:32.892437    2856 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0818 11:58:32.913643    2856 out.go:177] * [functional-843000] minikube v1.33.1 sur Darwin 14.6.1
	I0818 11:58:32.954811    2856 notify.go:220] Checking for updates...
	I0818 11:58:32.976545    2856 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 11:58:33.040360    2856 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	I0818 11:58:33.141587    2856 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0818 11:58:33.246457    2856 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 11:58:33.330432    2856 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	I0818 11:58:33.430305    2856 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 11:58:33.526233    2856 config.go:182] Loaded profile config "functional-843000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 11:58:33.526984    2856 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:58:33.527057    2856 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:58:33.536720    2856 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50784
	I0818 11:58:33.537101    2856 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:58:33.537542    2856 main.go:141] libmachine: Using API Version  1
	I0818 11:58:33.537576    2856 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:58:33.537844    2856 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:58:33.537967    2856 main.go:141] libmachine: (functional-843000) Calling .DriverName
	I0818 11:58:33.538151    2856 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 11:58:33.538414    2856 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 11:58:33.538438    2856 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 11:58:33.546946    2856 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50786
	I0818 11:58:33.547302    2856 main.go:141] libmachine: () Calling .GetVersion
	I0818 11:58:33.547615    2856 main.go:141] libmachine: Using API Version  1
	I0818 11:58:33.547632    2856 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 11:58:33.547848    2856 main.go:141] libmachine: () Calling .GetMachineName
	I0818 11:58:33.547962    2856 main.go:141] libmachine: (functional-843000) Calling .DriverName
	I0818 11:58:33.630404    2856 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0818 11:58:33.672556    2856 start.go:297] selected driver: hyperkit
	I0818 11:58:33.672581    2856 start.go:901] validating driver "hyperkit" against &{Name:functional-843000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-843000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 11:58:33.672768    2856 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 11:58:33.756616    2856 out.go:201] 
	W0818 11:58:33.798375    2856 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0818 11:58:33.819376    2856 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (1.07s)

TestFunctional/parallel/StatusCmd (0.53s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.53s)
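
The three status calls above differ only in output encoding. The -f flag takes a Go template rendered against minikube's status struct; .Host, .Kubelet, .APIServer and .Kubeconfig are the fields exercised here (the "kublet" label is the test's literal key name in the format string, not a minikube field). A condensed sketch:

  out/minikube-darwin-amd64 -p functional-843000 status
  out/minikube-darwin-amd64 -p functional-843000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
  out/minikube-darwin-amd64 -p functional-843000 status -o json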

TestFunctional/parallel/ServiceCmdConnect (8.7s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-843000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-843000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-xnsdz" [709bb195-4fa8-4938-92d4-2af0475af2cc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-xnsdz" [709bb195-4fa8-4938-92d4-2af0475af2cc] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004053935s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.169.0.4:31069
functional_test.go:1675: http://192.169.0.4:31069: success! body:

Hostname: hello-node-connect-67bdd5bbb4-xnsdz

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.4:8080/

Request Headers:
	accept-encoding=gzip
	host=192.169.0.4:31069
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.70s)
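
The sequence above is the standard NodePort round trip: create a deployment, expose it on a node port, resolve the node URL through minikube, then hit the endpoint. A minimal sketch with the same names used in this run (the trailing curl is an assumed manual check, not part of the test):

  kubectl --context functional-843000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
  kubectl --context functional-843000 expose deployment hello-node-connect --type=NodePort --port=8080
  out/minikube-darwin-amd64 -p functional-843000 service hello-node-connect --url
  # assumed manual check against the URL printed above, e.g.:
  curl http://192.169.0.4:31069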

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (27.16s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0bc74911-f07d-4bd9-ab40-85c56203ce3f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005299959s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-843000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-843000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-843000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-843000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8d6ce6c4-235c-4a85-87ca-840b9bb2f114] Pending
helpers_test.go:344: "sp-pod" [8d6ce6c4-235c-4a85-87ca-840b9bb2f114] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8d6ce6c4-235c-4a85-87ca-840b9bb2f114] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004192579s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-843000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-843000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-843000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [056cf646-5386-49fe-a1c2-e9fe3aba0cd6] Pending
helpers_test.go:344: "sp-pod" [056cf646-5386-49fe-a1c2-e9fe3aba0cd6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [056cf646-5386-49fe-a1c2-e9fe3aba0cd6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004519018s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-843000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.16s)
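
The persistence check above works by writing a file through the first pod, deleting that pod, and reading the file back from a fresh pod bound to the same claim, which proves the data lives on the claim-backed volume rather than in the pod. A condensed sketch, assuming the pvc.yaml and pod.yaml manifests from the test's testdata:

  kubectl --context functional-843000 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-843000 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-843000 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-843000 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-843000 apply -f testdata/storage-provisioner/pod.yaml
  # /tmp/mount/foo should still be listed: the volume outlived the first pod
  kubectl --context functional-843000 exec sp-pod -- ls /tmp/mount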

TestFunctional/parallel/SSHCmd (0.34s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.34s)

TestFunctional/parallel/CpCmd (1.04s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh -n functional-843000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 cp functional-843000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelCpCmd4192693036/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh -n functional-843000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh -n functional-843000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.04s)
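
The three cp invocations above cover host-to-VM, VM-to-host, and a target directory that does not yet exist inside the VM (minikube cp creates it). A condensed sketch; the /tmp/cp-test.txt host destination is an illustrative stand-in for the test's temp directory:

  out/minikube-darwin-amd64 -p functional-843000 cp testdata/cp-test.txt /home/docker/cp-test.txt
  out/minikube-darwin-amd64 -p functional-843000 cp functional-843000:/home/docker/cp-test.txt /tmp/cp-test.txt
  out/minikube-darwin-amd64 -p functional-843000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt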

TestFunctional/parallel/MySQL (26.55s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-843000 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-txxdb" [9370bd58-b558-484a-abf3-b93b445ab777] Pending
helpers_test.go:344: "mysql-6cdb49bbb-txxdb" [9370bd58-b558-484a-abf3-b93b445ab777] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-txxdb" [9370bd58-b558-484a-abf3-b93b445ab777] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.009631172s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-843000 exec mysql-6cdb49bbb-txxdb -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-843000 exec mysql-6cdb49bbb-txxdb -- mysql -ppassword -e "show databases;": exit status 1 (181.139336ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-843000 exec mysql-6cdb49bbb-txxdb -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-843000 exec mysql-6cdb49bbb-txxdb -- mysql -ppassword -e "show databases;": exit status 1 (135.012616ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-843000 exec mysql-6cdb49bbb-txxdb -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-843000 exec mysql-6cdb49bbb-txxdb -- mysql -ppassword -e "show databases;": exit status 1 (107.62303ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-843000 exec mysql-6cdb49bbb-txxdb -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.55s)
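
The non-zero exits above are the expected readiness dance: the pod reports Running before mysqld has finished initializing, so early connections fail with ERROR 1045/2002 and the test simply retries until one succeeds. A hedged sketch of the same retry by hand (the loop is an illustration, not the test's code; the pod name comes from this run):

  until kubectl --context functional-843000 exec mysql-6cdb49bbb-txxdb -- mysql -ppassword -e "show databases;"; do
    sleep 2  # wait for mysqld to finish initializing inside the pod
  done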

TestFunctional/parallel/FileSync (0.2s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1526/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh "sudo cat /etc/test/nested/copy/1526/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

TestFunctional/parallel/CertSync (1.04s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1526.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh "sudo cat /etc/ssl/certs/1526.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1526.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh "sudo cat /usr/share/ca-certificates/1526.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/15262.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh "sudo cat /etc/ssl/certs/15262.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/15262.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh "sudo cat /usr/share/ca-certificates/15262.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.04s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-843000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.17s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-843000 ssh "sudo systemctl is-active crio": exit status 1 (165.625668ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.17s)
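
The non-zero exit here is the pass condition: with docker as the active runtime, systemctl reports crio as "inactive" and exits with status 3 (the conventional systemd code for a unit that is not running), which ssh surfaces as "Process exited with status 3". A minimal sketch of the same check:

  # a non-zero exit means crio is disabled, which is what the test expects
  out/minikube-darwin-amd64 -p functional-843000 ssh "sudo systemctl is-active crio" || echo "crio inactive, as expected"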

TestFunctional/parallel/License (0.62s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.62s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (0.56s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.56s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-843000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-843000
docker.io/kicbase/echo-server:functional-843000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-843000 image ls --format short --alsologtostderr:
I0818 11:58:35.961988    2923 out.go:345] Setting OutFile to fd 1 ...
I0818 11:58:35.962821    2923 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 11:58:35.962830    2923 out.go:358] Setting ErrFile to fd 2...
I0818 11:58:35.962837    2923 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 11:58:35.963333    2923 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1007/.minikube/bin
I0818 11:58:35.963921    2923 config.go:182] Loaded profile config "functional-843000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 11:58:35.964010    2923 config.go:182] Loaded profile config "functional-843000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 11:58:35.964340    2923 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0818 11:58:35.964379    2923 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0818 11:58:35.972781    2923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50862
I0818 11:58:35.973194    2923 main.go:141] libmachine: () Calling .GetVersion
I0818 11:58:35.973593    2923 main.go:141] libmachine: Using API Version  1
I0818 11:58:35.973616    2923 main.go:141] libmachine: () Calling .SetConfigRaw
I0818 11:58:35.973853    2923 main.go:141] libmachine: () Calling .GetMachineName
I0818 11:58:35.973966    2923 main.go:141] libmachine: (functional-843000) Calling .GetState
I0818 11:58:35.974052    2923 main.go:141] libmachine: (functional-843000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0818 11:58:35.974120    2923 main.go:141] libmachine: (functional-843000) DBG | hyperkit pid from json: 2203
I0818 11:58:35.975373    2923 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0818 11:58:35.975402    2923 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0818 11:58:35.983748    2923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50864
I0818 11:58:35.984174    2923 main.go:141] libmachine: () Calling .GetVersion
I0818 11:58:35.984520    2923 main.go:141] libmachine: Using API Version  1
I0818 11:58:35.984534    2923 main.go:141] libmachine: () Calling .SetConfigRaw
I0818 11:58:35.984780    2923 main.go:141] libmachine: () Calling .GetMachineName
I0818 11:58:35.984901    2923 main.go:141] libmachine: (functional-843000) Calling .DriverName
I0818 11:58:35.985065    2923 ssh_runner.go:195] Run: systemctl --version
I0818 11:58:35.985084    2923 main.go:141] libmachine: (functional-843000) Calling .GetSSHHostname
I0818 11:58:35.985163    2923 main.go:141] libmachine: (functional-843000) Calling .GetSSHPort
I0818 11:58:35.985272    2923 main.go:141] libmachine: (functional-843000) Calling .GetSSHKeyPath
I0818 11:58:35.985344    2923 main.go:141] libmachine: (functional-843000) Calling .GetSSHUsername
I0818 11:58:35.985426    2923 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/functional-843000/id_rsa Username:docker}
I0818 11:58:36.014853    2923 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0818 11:58:36.034642    2923 main.go:141] libmachine: Making call to close driver server
I0818 11:58:36.034651    2923 main.go:141] libmachine: (functional-843000) Calling .Close
I0818 11:58:36.034795    2923 main.go:141] libmachine: Successfully made call to close driver server
I0818 11:58:36.034806    2923 main.go:141] libmachine: Making call to close connection to plugin binary
I0818 11:58:36.034812    2923 main.go:141] libmachine: Making call to close driver server
I0818 11:58:36.034817    2923 main.go:141] libmachine: (functional-843000) Calling .Close
I0818 11:58:36.034840    2923 main.go:141] libmachine: (functional-843000) DBG | Closing plugin on server side
I0818 11:58:36.034960    2923 main.go:141] libmachine: (functional-843000) DBG | Closing plugin on server side
I0818 11:58:36.034969    2923 main.go:141] libmachine: Successfully made call to close driver server
I0818 11:58:36.034981    2923 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.15s)
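
image ls supports several output encodings; this subtest covers --format short, and the table, json and yaml variants follow below. A condensed sketch of the four invocations (the test's --alsologtostderr flag is optional):

  out/minikube-darwin-amd64 -p functional-843000 image ls --format short
  out/minikube-darwin-amd64 -p functional-843000 image ls --format table
  out/minikube-darwin-amd64 -p functional-843000 image ls --format json
  out/minikube-darwin-amd64 -p functional-843000 image ls --format yaml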

TestFunctional/parallel/ImageCommands/ImageListTable (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-843000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.31.0           | 045733566833c | 88.4MB |
| registry.k8s.io/kube-proxy                  | v1.31.0           | ad83b2ca7b09e | 91.5MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/kube-apiserver              | v1.31.0           | 604f5db92eaa8 | 94.2MB |
| registry.k8s.io/kube-scheduler              | v1.31.0           | 1766f54c897f0 | 67.4MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| docker.io/kicbase/echo-server               | functional-843000 | 9056ab77afb8e | 4.94MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| localhost/my-image                          | functional-843000 | 371c37bb61ff1 | 1.24MB |
| docker.io/library/minikube-local-cache-test | functional-843000 | 34ae51216c237 | 30B    |
| docker.io/library/nginx                     | alpine            | 0f0eda053dc5c | 43.3MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/library/nginx                     | latest            | 5ef79149e0ec8 | 188MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-843000 image ls --format table --alsologtostderr:
I0818 11:58:38.606498    2948 out.go:345] Setting OutFile to fd 1 ...
I0818 11:58:38.606769    2948 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 11:58:38.606774    2948 out.go:358] Setting ErrFile to fd 2...
I0818 11:58:38.606778    2948 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 11:58:38.606975    2948 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1007/.minikube/bin
I0818 11:58:38.607568    2948 config.go:182] Loaded profile config "functional-843000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 11:58:38.607658    2948 config.go:182] Loaded profile config "functional-843000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 11:58:38.608013    2948 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0818 11:58:38.608056    2948 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0818 11:58:38.616359    2948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50898
I0818 11:58:38.616798    2948 main.go:141] libmachine: () Calling .GetVersion
I0818 11:58:38.617225    2948 main.go:141] libmachine: Using API Version  1
I0818 11:58:38.617256    2948 main.go:141] libmachine: () Calling .SetConfigRaw
I0818 11:58:38.617539    2948 main.go:141] libmachine: () Calling .GetMachineName
I0818 11:58:38.617723    2948 main.go:141] libmachine: (functional-843000) Calling .GetState
I0818 11:58:38.617822    2948 main.go:141] libmachine: (functional-843000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0818 11:58:38.617886    2948 main.go:141] libmachine: (functional-843000) DBG | hyperkit pid from json: 2203
I0818 11:58:38.619214    2948 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0818 11:58:38.619239    2948 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0818 11:58:38.627970    2948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50900
I0818 11:58:38.628323    2948 main.go:141] libmachine: () Calling .GetVersion
I0818 11:58:38.628678    2948 main.go:141] libmachine: Using API Version  1
I0818 11:58:38.628690    2948 main.go:141] libmachine: () Calling .SetConfigRaw
I0818 11:58:38.628952    2948 main.go:141] libmachine: () Calling .GetMachineName
I0818 11:58:38.629092    2948 main.go:141] libmachine: (functional-843000) Calling .DriverName
I0818 11:58:38.629247    2948 ssh_runner.go:195] Run: systemctl --version
I0818 11:58:38.629265    2948 main.go:141] libmachine: (functional-843000) Calling .GetSSHHostname
I0818 11:58:38.629347    2948 main.go:141] libmachine: (functional-843000) Calling .GetSSHPort
I0818 11:58:38.629418    2948 main.go:141] libmachine: (functional-843000) Calling .GetSSHKeyPath
I0818 11:58:38.629515    2948 main.go:141] libmachine: (functional-843000) Calling .GetSSHUsername
I0818 11:58:38.629603    2948 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/functional-843000/id_rsa Username:docker}
I0818 11:58:38.659951    2948 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0818 11:58:38.678402    2948 main.go:141] libmachine: Making call to close driver server
I0818 11:58:38.678412    2948 main.go:141] libmachine: (functional-843000) Calling .Close
I0818 11:58:38.678572    2948 main.go:141] libmachine: Successfully made call to close driver server
I0818 11:58:38.678583    2948 main.go:141] libmachine: Making call to close connection to plugin binary
I0818 11:58:38.678590    2948 main.go:141] libmachine: Making call to close driver server
I0818 11:58:38.678596    2948 main.go:141] libmachine: (functional-843000) Calling .Close
I0818 11:58:38.678754    2948 main.go:141] libmachine: (functional-843000) DBG | Closing plugin on server side
I0818 11:58:38.678786    2948 main.go:141] libmachine: Successfully made call to close driver server
I0818 11:58:38.678795    2948 main.go:141] libmachine: Making call to close connection to plugin binary
2024/08/18 11:58:44 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.15s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-843000 image ls --format json --alsologtostderr:
[{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"371c37bb61ff10bdcdcca32437b54fba1b2220d5f6db14033a9714b0110be0a3","repoDigests":[],"repoTags":["localhost/my-image:functional-843000"],"size":"1240000"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"67400000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"34ae51216c2372afb4b95ec9f3e36ba7e450accb7a586200cb9e1e8461a896ea","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-843000"],"size":"30"},{"id":"045733566833c40b15806c9b87
d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"88400000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-843000"],"size":"4940000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"94200000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","re
poDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43300000"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"91500000"}
,{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-843000 image ls --format json --alsologtostderr:
I0818 11:58:38.455422    2944 out.go:345] Setting OutFile to fd 1 ...
I0818 11:58:38.455697    2944 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 11:58:38.455702    2944 out.go:358] Setting ErrFile to fd 2...
I0818 11:58:38.455706    2944 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 11:58:38.455870    2944 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1007/.minikube/bin
I0818 11:58:38.456440    2944 config.go:182] Loaded profile config "functional-843000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 11:58:38.456532    2944 config.go:182] Loaded profile config "functional-843000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 11:58:38.456854    2944 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0818 11:58:38.456900    2944 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0818 11:58:38.465183    2944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50893
I0818 11:58:38.465615    2944 main.go:141] libmachine: () Calling .GetVersion
I0818 11:58:38.466035    2944 main.go:141] libmachine: Using API Version  1
I0818 11:58:38.466045    2944 main.go:141] libmachine: () Calling .SetConfigRaw
I0818 11:58:38.466271    2944 main.go:141] libmachine: () Calling .GetMachineName
I0818 11:58:38.466393    2944 main.go:141] libmachine: (functional-843000) Calling .GetState
I0818 11:58:38.466475    2944 main.go:141] libmachine: (functional-843000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0818 11:58:38.466538    2944 main.go:141] libmachine: (functional-843000) DBG | hyperkit pid from json: 2203
I0818 11:58:38.467805    2944 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0818 11:58:38.467828    2944 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0818 11:58:38.476145    2944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50895
I0818 11:58:38.476476    2944 main.go:141] libmachine: () Calling .GetVersion
I0818 11:58:38.476828    2944 main.go:141] libmachine: Using API Version  1
I0818 11:58:38.476843    2944 main.go:141] libmachine: () Calling .SetConfigRaw
I0818 11:58:38.477042    2944 main.go:141] libmachine: () Calling .GetMachineName
I0818 11:58:38.477152    2944 main.go:141] libmachine: (functional-843000) Calling .DriverName
I0818 11:58:38.477310    2944 ssh_runner.go:195] Run: systemctl --version
I0818 11:58:38.477329    2944 main.go:141] libmachine: (functional-843000) Calling .GetSSHHostname
I0818 11:58:38.477401    2944 main.go:141] libmachine: (functional-843000) Calling .GetSSHPort
I0818 11:58:38.477492    2944 main.go:141] libmachine: (functional-843000) Calling .GetSSHKeyPath
I0818 11:58:38.477644    2944 main.go:141] libmachine: (functional-843000) Calling .GetSSHUsername
I0818 11:58:38.477724    2944 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/functional-843000/id_rsa Username:docker}
I0818 11:58:38.509345    2944 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0818 11:58:38.525693    2944 main.go:141] libmachine: Making call to close driver server
I0818 11:58:38.525702    2944 main.go:141] libmachine: (functional-843000) Calling .Close
I0818 11:58:38.525862    2944 main.go:141] libmachine: Successfully made call to close driver server
I0818 11:58:38.525872    2944 main.go:141] libmachine: Making call to close connection to plugin binary
I0818 11:58:38.525882    2944 main.go:141] libmachine: Making call to close driver server
I0818 11:58:38.525888    2944 main.go:141] libmachine: (functional-843000) Calling .Close
I0818 11:58:38.525898    2944 main.go:141] libmachine: (functional-843000) DBG | Closing plugin on server side
I0818 11:58:38.526073    2944 main.go:141] libmachine: (functional-843000) DBG | Closing plugin on server side
I0818 11:58:38.526083    2944 main.go:141] libmachine: Successfully made call to close driver server
I0818 11:58:38.526091    2944 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.15s)
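Editor's sketch: the JSON printed by "image ls --format json" above is an array of image records with the fields id, repoDigests, repoTags and size (note that size is a quoted string, not a number). A minimal Go program for consuming that output; this is illustrative only and not part of the test suite:

    // listimages.go: decode the array emitted by "minikube image ls --format json".
    // Usage (hypothetical): minikube -p functional-843000 image ls --format json | go run listimages.go
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // imageEntry mirrors one element of the array, fields inferred from the log above.
    type imageEntry struct {
        ID          string   `json:"id"`
        RepoDigests []string `json:"repoDigests"`
        RepoTags    []string `json:"repoTags"`
        Size        string   `json:"size"` // quoted string in the output, not a number
    }

    func main() {
        var images []imageEntry
        if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
            fmt.Fprintln(os.Stderr, "decode:", err)
            os.Exit(1)
        }
        for _, img := range images {
            if len(img.RepoTags) > 0 {
                fmt.Printf("%s\t%s bytes\n", img.RepoTags[0], img.Size)
            }
        }
    }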

TestFunctional/parallel/ImageCommands/ImageListYaml (0.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-843000 image ls --format yaml --alsologtostderr:
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 34ae51216c2372afb4b95ec9f3e36ba7e450accb7a586200cb9e1e8461a896ea
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-843000
size: "30"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "67400000"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "91500000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "88400000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-843000
size: "4940000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43300000"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "94200000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-843000 image ls --format yaml --alsologtostderr:
I0818 11:58:36.113520    2927 out.go:345] Setting OutFile to fd 1 ...
I0818 11:58:36.113711    2927 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 11:58:36.113716    2927 out.go:358] Setting ErrFile to fd 2...
I0818 11:58:36.113720    2927 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 11:58:36.113892    2927 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1007/.minikube/bin
I0818 11:58:36.114474    2927 config.go:182] Loaded profile config "functional-843000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 11:58:36.114574    2927 config.go:182] Loaded profile config "functional-843000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 11:58:36.114954    2927 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0818 11:58:36.115000    2927 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0818 11:58:36.123281    2927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50867
I0818 11:58:36.123682    2927 main.go:141] libmachine: () Calling .GetVersion
I0818 11:58:36.124102    2927 main.go:141] libmachine: Using API Version  1
I0818 11:58:36.124134    2927 main.go:141] libmachine: () Calling .SetConfigRaw
I0818 11:58:36.124374    2927 main.go:141] libmachine: () Calling .GetMachineName
I0818 11:58:36.124516    2927 main.go:141] libmachine: (functional-843000) Calling .GetState
I0818 11:58:36.124607    2927 main.go:141] libmachine: (functional-843000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0818 11:58:36.124670    2927 main.go:141] libmachine: (functional-843000) DBG | hyperkit pid from json: 2203
I0818 11:58:36.125928    2927 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0818 11:58:36.125953    2927 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0818 11:58:36.134119    2927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50869
I0818 11:58:36.134461    2927 main.go:141] libmachine: () Calling .GetVersion
I0818 11:58:36.134806    2927 main.go:141] libmachine: Using API Version  1
I0818 11:58:36.134824    2927 main.go:141] libmachine: () Calling .SetConfigRaw
I0818 11:58:36.135042    2927 main.go:141] libmachine: () Calling .GetMachineName
I0818 11:58:36.135147    2927 main.go:141] libmachine: (functional-843000) Calling .DriverName
I0818 11:58:36.135290    2927 ssh_runner.go:195] Run: systemctl --version
I0818 11:58:36.135310    2927 main.go:141] libmachine: (functional-843000) Calling .GetSSHHostname
I0818 11:58:36.135390    2927 main.go:141] libmachine: (functional-843000) Calling .GetSSHPort
I0818 11:58:36.135490    2927 main.go:141] libmachine: (functional-843000) Calling .GetSSHKeyPath
I0818 11:58:36.135576    2927 main.go:141] libmachine: (functional-843000) Calling .GetSSHUsername
I0818 11:58:36.135686    2927 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/functional-843000/id_rsa Username:docker}
I0818 11:58:36.164295    2927 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0818 11:58:36.181846    2927 main.go:141] libmachine: Making call to close driver server
I0818 11:58:36.181860    2927 main.go:141] libmachine: (functional-843000) Calling .Close
I0818 11:58:36.182008    2927 main.go:141] libmachine: Successfully made call to close driver server
I0818 11:58:36.182017    2927 main.go:141] libmachine: Making call to close connection to plugin binary
I0818 11:58:36.182021    2927 main.go:141] libmachine: Making call to close driver server
I0818 11:58:36.182026    2927 main.go:141] libmachine: (functional-843000) Calling .Close
I0818 11:58:36.182032    2927 main.go:141] libmachine: (functional-843000) DBG | Closing plugin on server side
I0818 11:58:36.182172    2927 main.go:141] libmachine: (functional-843000) DBG | Closing plugin on server side
I0818 11:58:36.182177    2927 main.go:141] libmachine: Successfully made call to close driver server
I0818 11:58:36.182189    2927 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.15s)
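Editor's sketch: the YAML listing above carries the same records as the JSON form. A short companion that decodes it; the use of gopkg.in/yaml.v3 here is an assumption for illustration, the test itself only inspects the raw output:

    // yamlimages.go: decode the sequence emitted by "minikube image ls --format yaml".
    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    type yamlImage struct {
        ID          string   `yaml:"id"`
        RepoDigests []string `yaml:"repoDigests"`
        RepoTags    []string `yaml:"repoTags"`
        Size        string   `yaml:"size"`
    }

    func main() {
        var images []yamlImage
        if err := yaml.NewDecoder(os.Stdin).Decode(&images); err != nil {
            fmt.Fprintln(os.Stderr, "decode:", err)
            os.Exit(1)
        }
        fmt.Printf("%d images listed\n", len(images))
    }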

TestFunctional/parallel/ImageCommands/ImageBuild (2.19s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-843000 ssh pgrep buildkitd: exit status 1 (123.674109ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 image build -t localhost/my-image:functional-843000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-amd64 -p functional-843000 image build -t localhost/my-image:functional-843000 testdata/build --alsologtostderr: (1.914309098s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-843000 image build -t localhost/my-image:functional-843000 testdata/build --alsologtostderr:
I0818 11:58:36.385464    2936 out.go:345] Setting OutFile to fd 1 ...
I0818 11:58:36.385745    2936 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 11:58:36.385750    2936 out.go:358] Setting ErrFile to fd 2...
I0818 11:58:36.385754    2936 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 11:58:36.385948    2936 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1007/.minikube/bin
I0818 11:58:36.386580    2936 config.go:182] Loaded profile config "functional-843000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 11:58:36.387546    2936 config.go:182] Loaded profile config "functional-843000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 11:58:36.387888    2936 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0818 11:58:36.387934    2936 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0818 11:58:36.396302    2936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50880
I0818 11:58:36.396714    2936 main.go:141] libmachine: () Calling .GetVersion
I0818 11:58:36.397143    2936 main.go:141] libmachine: Using API Version  1
I0818 11:58:36.397154    2936 main.go:141] libmachine: () Calling .SetConfigRaw
I0818 11:58:36.397371    2936 main.go:141] libmachine: () Calling .GetMachineName
I0818 11:58:36.397491    2936 main.go:141] libmachine: (functional-843000) Calling .GetState
I0818 11:58:36.397588    2936 main.go:141] libmachine: (functional-843000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0818 11:58:36.397703    2936 main.go:141] libmachine: (functional-843000) DBG | hyperkit pid from json: 2203
I0818 11:58:36.398948    2936 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0818 11:58:36.398972    2936 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0818 11:58:36.407352    2936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50882
I0818 11:58:36.407723    2936 main.go:141] libmachine: () Calling .GetVersion
I0818 11:58:36.408034    2936 main.go:141] libmachine: Using API Version  1
I0818 11:58:36.408042    2936 main.go:141] libmachine: () Calling .SetConfigRaw
I0818 11:58:36.408252    2936 main.go:141] libmachine: () Calling .GetMachineName
I0818 11:58:36.408382    2936 main.go:141] libmachine: (functional-843000) Calling .DriverName
I0818 11:58:36.408532    2936 ssh_runner.go:195] Run: systemctl --version
I0818 11:58:36.408554    2936 main.go:141] libmachine: (functional-843000) Calling .GetSSHHostname
I0818 11:58:36.408642    2936 main.go:141] libmachine: (functional-843000) Calling .GetSSHPort
I0818 11:58:36.408732    2936 main.go:141] libmachine: (functional-843000) Calling .GetSSHKeyPath
I0818 11:58:36.408818    2936 main.go:141] libmachine: (functional-843000) Calling .GetSSHUsername
I0818 11:58:36.408907    2936 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/functional-843000/id_rsa Username:docker}
I0818 11:58:36.438948    2936 build_images.go:161] Building image from path: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.675964131.tar
I0818 11:58:36.439021    2936 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0818 11:58:36.447419    2936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.675964131.tar
I0818 11:58:36.450694    2936 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.675964131.tar: stat -c "%s %y" /var/lib/minikube/build/build.675964131.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.675964131.tar': No such file or directory
I0818 11:58:36.450717    2936 ssh_runner.go:362] scp /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.675964131.tar --> /var/lib/minikube/build/build.675964131.tar (3072 bytes)
I0818 11:58:36.471366    2936 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.675964131
I0818 11:58:36.479703    2936 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.675964131 -xf /var/lib/minikube/build/build.675964131.tar
I0818 11:58:36.487808    2936 docker.go:360] Building image: /var/lib/minikube/build/build.675964131
I0818 11:58:36.487870    2936 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-843000 /var/lib/minikube/build/build.675964131
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B 0.0s done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:371c37bb61ff10bdcdcca32437b54fba1b2220d5f6db14033a9714b0110be0a3 done
#8 naming to localhost/my-image:functional-843000 done
#8 DONE 0.0s
I0818 11:58:38.203155    2936 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-843000 /var/lib/minikube/build/build.675964131: (1.715277742s)
I0818 11:58:38.203225    2936 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.675964131
I0818 11:58:38.211855    2936 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.675964131.tar
I0818 11:58:38.219714    2936 build_images.go:217] Built localhost/my-image:functional-843000 from /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.675964131.tar
I0818 11:58:38.219735    2936 build_images.go:133] succeeded building to: functional-843000
I0818 11:58:38.219740    2936 build_images.go:134] failed building to: 
I0818 11:58:38.219759    2936 main.go:141] libmachine: Making call to close driver server
I0818 11:58:38.219765    2936 main.go:141] libmachine: (functional-843000) Calling .Close
I0818 11:58:38.219906    2936 main.go:141] libmachine: (functional-843000) DBG | Closing plugin on server side
I0818 11:58:38.219938    2936 main.go:141] libmachine: Successfully made call to close driver server
I0818 11:58:38.219965    2936 main.go:141] libmachine: Making call to close connection to plugin binary
I0818 11:58:38.219976    2936 main.go:141] libmachine: Making call to close driver server
I0818 11:58:38.219982    2936 main.go:141] libmachine: (functional-843000) Calling .Close
I0818 11:58:38.220170    2936 main.go:141] libmachine: Successfully made call to close driver server
I0818 11:58:38.220179    2936 main.go:141] libmachine: Making call to close connection to plugin binary
I0818 11:58:38.220183    2936 main.go:141] libmachine: (functional-843000) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.19s)
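Editor's note: the numbered BuildKit steps above (#5 FROM gcr.io/k8s-minikube/busybox, #6 RUN true, #7 ADD content.txt /) imply a three-instruction Dockerfile of roughly the following shape. This is a reconstruction from the build log, not the actual contents of testdata/build, which are not shown here and may differ:

    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /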

TestFunctional/parallel/ImageCommands/Setup (1.83s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.797325773s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-843000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.83s)

TestFunctional/parallel/DockerEnv/bash (0.61s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-843000 docker-env) && out/minikube-darwin-amd64 status -p functional-843000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-843000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.61s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 image load --daemon kicbase/echo-server:functional-843000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.64s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 image load --daemon kicbase/echo-server:functional-843000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.64s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.43s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-843000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 image load --daemon kicbase/echo-server:functional-843000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.43s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.4s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 image save kicbase/echo-server:functional-843000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.40s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 image rm kicbase/echo-server:functional-843000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.74s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.74s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-843000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 image save --daemon kicbase/echo-server:functional-843000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-843000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)

TestFunctional/parallel/ServiceCmd/DeployApp (20.27s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-843000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-843000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-988qt" [c4a46694-b0c0-4acf-8c37-c7f729a11f0b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-988qt" [c4a46694-b0c0-4acf-8c37-c7f729a11f0b] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 20.006013246s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (20.27s)
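Editor's sketch: the DeployApp flow above is plain kubectl, create a Deployment from registry.k8s.io/echoserver:1.8, expose it as a NodePort on 8080, then wait for a pod labelled app=hello-node to reach Running. A rough client-go equivalent of that final polling step; this is an approximation for illustration, not the helpers_test implementation:

    // waitpods.go: poll for a Running pod labelled app=hello-node in "default".
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumes the kubeconfig written by the test run (context functional-843000).
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        deadline := time.Now().Add(10 * time.Minute) // the test waits up to 10m0s
        for time.Now().Before(deadline) {
            pods, err := client.CoreV1().Pods("default").List(context.TODO(),
                metav1.ListOptions{LabelSelector: "app=hello-node"})
            if err == nil {
                for _, p := range pods.Items {
                    if p.Status.Phase == corev1.PodRunning {
                        fmt.Printf("pod %s is Running\n", p.Name)
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        panic("timed out waiting for app=hello-node")
    }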

TestFunctional/parallel/ServiceCmd/List (0.18s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.18s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.22s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 service list -o json
functional_test.go:1494: Took "215.611915ms" to run "out/minikube-darwin-amd64 -p functional-843000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.22s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.26s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.169.0.4:30653
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.26s)

TestFunctional/parallel/ServiceCmd/Format (0.28s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.28s)

TestFunctional/parallel/ServiceCmd/URL (0.24s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.169.0.4:30653
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.24s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.38s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-843000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-843000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-843000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2654: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-843000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.38s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-843000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.14s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-843000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [2c34d95b-a1af-4857-9060-cf01ba7d624d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [2c34d95b-a1af-4857-9060-cf01ba7d624d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.001873672s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.14s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-843000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.220.51 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)
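Editor's sketch: AccessDirect verifies the tunnel by contacting the LoadBalancer ingress IP from the host. An illustrative probe of the same kind; 10.105.220.51 is the ingress IP reported by this particular run, and any HTTP client would do:

    // tunnelcheck.go: probe the tunneled LoadBalancer IP reported above.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Get("http://10.105.220.51")
        if err != nil {
            fmt.Println("tunnel not reachable:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("tunnel status:", resp.Status) // expect 200 OK from nginx-svc
    }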

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)
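Editor's sketch: the dig invocation above queries the cluster DNS service (10.96.0.10) directly rather than the host resolver. The same lookup in Go, using net.Resolver with a custom Dial that pins the DNS server; a sketch for illustration only:

    // digcheck.go: resolve the in-cluster service name via the cluster DNS IP.
    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
                d := net.Dialer{Timeout: 5 * time.Second}
                // Ignore the default resolver address; ask the cluster DNS directly.
                return d.DialContext(ctx, network, "10.96.0.10:53")
            },
        }
        ips, err := r.LookupHost(context.Background(), "nginx-svc.default.svc.cluster.local.")
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Println("resolved:", ips) // should match the LoadBalancer ingress IP
    }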

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.14s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-843000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.14s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.25s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.25s)

TestFunctional/parallel/ProfileCmd/profile_list (0.31s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1315: Took "231.669254ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1329: Took "79.776413ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1366: Took "179.733926ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1379: Took "78.571808ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

TestFunctional/parallel/MountCmd/any-port (4.86s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-843000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port578454362/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724007506100407000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port578454362/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724007506100407000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port578454362/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724007506100407000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port578454362/001/test-1724007506100407000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-843000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (119.271882ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 18 18:58 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 18 18:58 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 18 18:58 test-1724007506100407000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh cat /mount-9p/test-1724007506100407000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-843000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d72c830a-4462-415a-9aa3-90c2c0809cd2] Pending
helpers_test.go:344: "busybox-mount" [d72c830a-4462-415a-9aa3-90c2c0809cd2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d72c830a-4462-415a-9aa3-90c2c0809cd2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d72c830a-4462-415a-9aa3-90c2c0809cd2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.00319115s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-843000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-843000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port578454362/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (4.86s)

TestFunctional/parallel/MountCmd/specific-port (1.92s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-843000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port3391355063/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-843000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (119.642893ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-843000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port3391355063/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-843000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port3391355063/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.92s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.75s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-843000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3664400051/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-843000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3664400051/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-843000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3664400051/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-843000 ssh "findmnt -T" /mount1: exit status 1 (264.059542ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-843000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-843000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-843000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3664400051/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-843000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3664400051/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-843000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3664400051/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.75s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-843000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-843000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-843000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (201.11s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-373000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit 
E0818 12:01:48.084274    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-373000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit : (3m20.736448782s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (201.11s)

TestMultiControlPlane/serial/DeployApp (4.9s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-373000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-373000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-373000 -- rollout status deployment/busybox: (2.619932575s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-373000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-373000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-373000 -- exec busybox-7dff88458-85gjs -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-373000 -- exec busybox-7dff88458-hdg8r -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-373000 -- exec busybox-7dff88458-hxp7z -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-373000 -- exec busybox-7dff88458-85gjs -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-373000 -- exec busybox-7dff88458-hdg8r -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-373000 -- exec busybox-7dff88458-hxp7z -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-373000 -- exec busybox-7dff88458-85gjs -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-373000 -- exec busybox-7dff88458-hdg8r -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-373000 -- exec busybox-7dff88458-hxp7z -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.90s)

TestMultiControlPlane/serial/PingHostFromPods (1.29s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-373000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-373000 -- exec busybox-7dff88458-85gjs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-373000 -- exec busybox-7dff88458-85gjs -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-373000 -- exec busybox-7dff88458-hdg8r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-373000 -- exec busybox-7dff88458-hdg8r -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-373000 -- exec busybox-7dff88458-hxp7z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-373000 -- exec busybox-7dff88458-hxp7z -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.29s)

TestMultiControlPlane/serial/AddWorkerNode (49.87s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-373000 -v=7 --alsologtostderr
E0818 12:02:37.593187    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:02:37.600490    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:02:37.612509    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:02:37.635304    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:02:37.678116    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:02:37.760526    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:02:37.922484    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:02:38.243819    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:02:38.885023    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:02:40.168090    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:02:42.730541    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:02:47.852583    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:02:58.094709    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-373000 -v=7 --alsologtostderr: (49.404651247s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (49.87s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-373000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.34s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 cp testdata/cp-test.txt ha-373000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 cp ha-373000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1633039751/001/cp-test_ha-373000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 cp ha-373000:/home/docker/cp-test.txt ha-373000-m02:/home/docker/cp-test_ha-373000_ha-373000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000-m02 "sudo cat /home/docker/cp-test_ha-373000_ha-373000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 cp ha-373000:/home/docker/cp-test.txt ha-373000-m03:/home/docker/cp-test_ha-373000_ha-373000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000-m03 "sudo cat /home/docker/cp-test_ha-373000_ha-373000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 cp ha-373000:/home/docker/cp-test.txt ha-373000-m04:/home/docker/cp-test_ha-373000_ha-373000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000-m04 "sudo cat /home/docker/cp-test_ha-373000_ha-373000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 cp testdata/cp-test.txt ha-373000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 cp ha-373000-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1633039751/001/cp-test_ha-373000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 cp ha-373000-m02:/home/docker/cp-test.txt ha-373000:/home/docker/cp-test_ha-373000-m02_ha-373000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000 "sudo cat /home/docker/cp-test_ha-373000-m02_ha-373000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 cp ha-373000-m02:/home/docker/cp-test.txt ha-373000-m03:/home/docker/cp-test_ha-373000-m02_ha-373000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000-m03 "sudo cat /home/docker/cp-test_ha-373000-m02_ha-373000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 cp ha-373000-m02:/home/docker/cp-test.txt ha-373000-m04:/home/docker/cp-test_ha-373000-m02_ha-373000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000-m04 "sudo cat /home/docker/cp-test_ha-373000-m02_ha-373000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 cp testdata/cp-test.txt ha-373000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 cp ha-373000-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1633039751/001/cp-test_ha-373000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 cp ha-373000-m03:/home/docker/cp-test.txt ha-373000:/home/docker/cp-test_ha-373000-m03_ha-373000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000 "sudo cat /home/docker/cp-test_ha-373000-m03_ha-373000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 cp ha-373000-m03:/home/docker/cp-test.txt ha-373000-m02:/home/docker/cp-test_ha-373000-m03_ha-373000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000-m02 "sudo cat /home/docker/cp-test_ha-373000-m03_ha-373000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 cp ha-373000-m03:/home/docker/cp-test.txt ha-373000-m04:/home/docker/cp-test_ha-373000-m03_ha-373000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000-m04 "sudo cat /home/docker/cp-test_ha-373000-m03_ha-373000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 cp testdata/cp-test.txt ha-373000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 cp ha-373000-m04:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1633039751/001/cp-test_ha-373000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 cp ha-373000-m04:/home/docker/cp-test.txt ha-373000:/home/docker/cp-test_ha-373000-m04_ha-373000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000 "sudo cat /home/docker/cp-test_ha-373000-m04_ha-373000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 cp ha-373000-m04:/home/docker/cp-test.txt ha-373000-m02:/home/docker/cp-test_ha-373000-m04_ha-373000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000-m02 "sudo cat /home/docker/cp-test_ha-373000-m04_ha-373000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 cp ha-373000-m04:/home/docker/cp-test.txt ha-373000-m03:/home/docker/cp-test_ha-373000-m04_ha-373000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 ssh -n ha-373000-m03 "sudo cat /home/docker/cp-test_ha-373000-m04_ha-373000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (9.34s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 node stop m02 -v=7 --alsologtostderr
E0818 12:03:18.577667    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-373000 node stop m02 -v=7 --alsologtostderr: (8.342906341s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-373000 status -v=7 --alsologtostderr: exit status 7 (360.259999ms)

-- stdout --
	ha-373000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-373000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-373000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-373000-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0818 12:03:25.552147    3753 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:03:25.552343    3753 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:03:25.552350    3753 out.go:358] Setting ErrFile to fd 2...
	I0818 12:03:25.552354    3753 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:03:25.552546    3753 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1007/.minikube/bin
	I0818 12:03:25.552721    3753 out.go:352] Setting JSON to false
	I0818 12:03:25.552744    3753 mustload.go:65] Loading cluster: ha-373000
	I0818 12:03:25.552783    3753 notify.go:220] Checking for updates...
	I0818 12:03:25.553090    3753 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:03:25.553105    3753 status.go:255] checking status of ha-373000 ...
	I0818 12:03:25.553445    3753 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:03:25.553501    3753 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:03:25.562399    3753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51636
	I0818 12:03:25.562781    3753 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:03:25.563230    3753 main.go:141] libmachine: Using API Version  1
	I0818 12:03:25.563243    3753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:03:25.563437    3753 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:03:25.563545    3753 main.go:141] libmachine: (ha-373000) Calling .GetState
	I0818 12:03:25.563629    3753 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:03:25.563710    3753 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid from json: 2975
	I0818 12:03:25.564687    3753 status.go:330] ha-373000 host status = "Running" (err=<nil>)
	I0818 12:03:25.564709    3753 host.go:66] Checking if "ha-373000" exists ...
	I0818 12:03:25.564953    3753 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:03:25.564973    3753 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:03:25.573272    3753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51638
	I0818 12:03:25.573640    3753 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:03:25.574032    3753 main.go:141] libmachine: Using API Version  1
	I0818 12:03:25.574049    3753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:03:25.574257    3753 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:03:25.574369    3753 main.go:141] libmachine: (ha-373000) Calling .GetIP
	I0818 12:03:25.574448    3753 host.go:66] Checking if "ha-373000" exists ...
	I0818 12:03:25.574696    3753 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:03:25.574718    3753 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:03:25.585800    3753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51640
	I0818 12:03:25.586174    3753 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:03:25.586493    3753 main.go:141] libmachine: Using API Version  1
	I0818 12:03:25.586520    3753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:03:25.586733    3753 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:03:25.586865    3753 main.go:141] libmachine: (ha-373000) Calling .DriverName
	I0818 12:03:25.587018    3753 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 12:03:25.587039    3753 main.go:141] libmachine: (ha-373000) Calling .GetSSHHostname
	I0818 12:03:25.587126    3753 main.go:141] libmachine: (ha-373000) Calling .GetSSHPort
	I0818 12:03:25.587220    3753 main.go:141] libmachine: (ha-373000) Calling .GetSSHKeyPath
	I0818 12:03:25.587316    3753 main.go:141] libmachine: (ha-373000) Calling .GetSSHUsername
	I0818 12:03:25.587402    3753 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000/id_rsa Username:docker}
	I0818 12:03:25.621036    3753 ssh_runner.go:195] Run: systemctl --version
	I0818 12:03:25.625328    3753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 12:03:25.636340    3753 kubeconfig.go:125] found "ha-373000" server: "https://192.169.0.254:8443"
	I0818 12:03:25.636364    3753 api_server.go:166] Checking apiserver status ...
	I0818 12:03:25.636399    3753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 12:03:25.647357    3753 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2041/cgroup
	W0818 12:03:25.656200    3753 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2041/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0818 12:03:25.656251    3753 ssh_runner.go:195] Run: ls
	I0818 12:03:25.659292    3753 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0818 12:03:25.663211    3753 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0818 12:03:25.663223    3753 status.go:422] ha-373000 apiserver status = Running (err=<nil>)
	I0818 12:03:25.663233    3753 status.go:257] ha-373000 status: &{Name:ha-373000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 12:03:25.663243    3753 status.go:255] checking status of ha-373000-m02 ...
	I0818 12:03:25.663495    3753 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:03:25.663515    3753 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:03:25.672274    3753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51644
	I0818 12:03:25.672637    3753 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:03:25.672959    3753 main.go:141] libmachine: Using API Version  1
	I0818 12:03:25.672970    3753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:03:25.673185    3753 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:03:25.673305    3753 main.go:141] libmachine: (ha-373000-m02) Calling .GetState
	I0818 12:03:25.673390    3753 main.go:141] libmachine: (ha-373000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:03:25.673463    3753 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid from json: 2991
	I0818 12:03:25.674434    3753 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid 2991 missing from process table
	I0818 12:03:25.674467    3753 status.go:330] ha-373000-m02 host status = "Stopped" (err=<nil>)
	I0818 12:03:25.674476    3753 status.go:343] host is not running, skipping remaining checks
	I0818 12:03:25.674484    3753 status.go:257] ha-373000-m02 status: &{Name:ha-373000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 12:03:25.674498    3753 status.go:255] checking status of ha-373000-m03 ...
	I0818 12:03:25.674748    3753 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:03:25.674771    3753 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:03:25.683245    3753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51646
	I0818 12:03:25.683617    3753 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:03:25.683934    3753 main.go:141] libmachine: Using API Version  1
	I0818 12:03:25.683946    3753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:03:25.684144    3753 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:03:25.684247    3753 main.go:141] libmachine: (ha-373000-m03) Calling .GetState
	I0818 12:03:25.684325    3753 main.go:141] libmachine: (ha-373000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:03:25.684404    3753 main.go:141] libmachine: (ha-373000-m03) DBG | hyperkit pid from json: 3309
	I0818 12:03:25.685375    3753 status.go:330] ha-373000-m03 host status = "Running" (err=<nil>)
	I0818 12:03:25.685386    3753 host.go:66] Checking if "ha-373000-m03" exists ...
	I0818 12:03:25.685643    3753 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:03:25.685663    3753 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:03:25.694345    3753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51648
	I0818 12:03:25.694705    3753 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:03:25.695016    3753 main.go:141] libmachine: Using API Version  1
	I0818 12:03:25.695026    3753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:03:25.695215    3753 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:03:25.695323    3753 main.go:141] libmachine: (ha-373000-m03) Calling .GetIP
	I0818 12:03:25.695413    3753 host.go:66] Checking if "ha-373000-m03" exists ...
	I0818 12:03:25.695664    3753 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:03:25.695690    3753 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:03:25.704193    3753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51650
	I0818 12:03:25.704555    3753 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:03:25.704874    3753 main.go:141] libmachine: Using API Version  1
	I0818 12:03:25.704893    3753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:03:25.705108    3753 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:03:25.705205    3753 main.go:141] libmachine: (ha-373000-m03) Calling .DriverName
	I0818 12:03:25.705333    3753 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 12:03:25.705344    3753 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHHostname
	I0818 12:03:25.705419    3753 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHPort
	I0818 12:03:25.705497    3753 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHKeyPath
	I0818 12:03:25.705576    3753 main.go:141] libmachine: (ha-373000-m03) Calling .GetSSHUsername
	I0818 12:03:25.705652    3753 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m03/id_rsa Username:docker}
	I0818 12:03:25.742867    3753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 12:03:25.754424    3753 kubeconfig.go:125] found "ha-373000" server: "https://192.169.0.254:8443"
	I0818 12:03:25.754442    3753 api_server.go:166] Checking apiserver status ...
	I0818 12:03:25.754484    3753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 12:03:25.765958    3753 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1907/cgroup
	W0818 12:03:25.773242    3753 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1907/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0818 12:03:25.773296    3753 ssh_runner.go:195] Run: ls
	I0818 12:03:25.776541    3753 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0818 12:03:25.779673    3753 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0818 12:03:25.779685    3753 status.go:422] ha-373000-m03 apiserver status = Running (err=<nil>)
	I0818 12:03:25.779693    3753 status.go:257] ha-373000-m03 status: &{Name:ha-373000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 12:03:25.779703    3753 status.go:255] checking status of ha-373000-m04 ...
	I0818 12:03:25.779971    3753 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:03:25.779991    3753 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:03:25.788733    3753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51654
	I0818 12:03:25.789120    3753 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:03:25.789466    3753 main.go:141] libmachine: Using API Version  1
	I0818 12:03:25.789478    3753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:03:25.789706    3753 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:03:25.789820    3753 main.go:141] libmachine: (ha-373000-m04) Calling .GetState
	I0818 12:03:25.789900    3753 main.go:141] libmachine: (ha-373000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:03:25.789983    3753 main.go:141] libmachine: (ha-373000-m04) DBG | hyperkit pid from json: 3421
	I0818 12:03:25.790954    3753 status.go:330] ha-373000-m04 host status = "Running" (err=<nil>)
	I0818 12:03:25.790964    3753 host.go:66] Checking if "ha-373000-m04" exists ...
	I0818 12:03:25.791231    3753 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:03:25.791252    3753 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:03:25.799714    3753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51656
	I0818 12:03:25.800072    3753 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:03:25.800421    3753 main.go:141] libmachine: Using API Version  1
	I0818 12:03:25.800436    3753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:03:25.800632    3753 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:03:25.800742    3753 main.go:141] libmachine: (ha-373000-m04) Calling .GetIP
	I0818 12:03:25.800832    3753 host.go:66] Checking if "ha-373000-m04" exists ...
	I0818 12:03:25.801106    3753 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:03:25.801129    3753 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:03:25.809476    3753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51658
	I0818 12:03:25.809807    3753 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:03:25.810131    3753 main.go:141] libmachine: Using API Version  1
	I0818 12:03:25.810140    3753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:03:25.810352    3753 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:03:25.810464    3753 main.go:141] libmachine: (ha-373000-m04) Calling .DriverName
	I0818 12:03:25.810588    3753 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 12:03:25.810599    3753 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHHostname
	I0818 12:03:25.810681    3753 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHPort
	I0818 12:03:25.810773    3753 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHKeyPath
	I0818 12:03:25.810870    3753 main.go:141] libmachine: (ha-373000-m04) Calling .GetSSHUsername
	I0818 12:03:25.810968    3753 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/ha-373000-m04/id_rsa Username:docker}
	I0818 12:03:25.846095    3753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 12:03:25.856333    3753 status.go:257] ha-373000-m04 status: &{Name:ha-373000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (8.70s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.27s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 node start m02 -v=7 --alsologtostderr
E0818 12:03:59.538920    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-373000 node start m02 -v=7 --alsologtostderr: (37.756840748s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (38.26s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.34s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.26s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-373000 stop -v=7 --alsologtostderr: (24.888722102s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-373000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-373000 status -v=7 --alsologtostderr: exit status 7 (91.194894ms)

-- stdout --
	ha-373000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-373000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-373000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0818 12:09:00.298416    3971 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:09:00.298696    3971 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:09:00.298701    3971 out.go:358] Setting ErrFile to fd 2...
	I0818 12:09:00.298705    3971 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:09:00.298883    3971 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1007/.minikube/bin
	I0818 12:09:00.299060    3971 out.go:352] Setting JSON to false
	I0818 12:09:00.299086    3971 mustload.go:65] Loading cluster: ha-373000
	I0818 12:09:00.299124    3971 notify.go:220] Checking for updates...
	I0818 12:09:00.299387    3971 config.go:182] Loaded profile config "ha-373000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:09:00.299403    3971 status.go:255] checking status of ha-373000 ...
	I0818 12:09:00.299782    3971 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:09:00.299833    3971 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:09:00.309064    3971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52007
	I0818 12:09:00.309444    3971 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:09:00.309863    3971 main.go:141] libmachine: Using API Version  1
	I0818 12:09:00.309888    3971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:09:00.310112    3971 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:09:00.310231    3971 main.go:141] libmachine: (ha-373000) Calling .GetState
	I0818 12:09:00.310336    3971 main.go:141] libmachine: (ha-373000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:00.310398    3971 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid from json: 3836
	I0818 12:09:00.311276    3971 main.go:141] libmachine: (ha-373000) DBG | hyperkit pid 3836 missing from process table
	I0818 12:09:00.311300    3971 status.go:330] ha-373000 host status = "Stopped" (err=<nil>)
	I0818 12:09:00.311307    3971 status.go:343] host is not running, skipping remaining checks
	I0818 12:09:00.311314    3971 status.go:257] ha-373000 status: &{Name:ha-373000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 12:09:00.311333    3971 status.go:255] checking status of ha-373000-m02 ...
	I0818 12:09:00.311593    3971 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:09:00.311614    3971 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:09:00.319992    3971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52009
	I0818 12:09:00.320306    3971 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:09:00.320694    3971 main.go:141] libmachine: Using API Version  1
	I0818 12:09:00.320717    3971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:09:00.320916    3971 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:09:00.321040    3971 main.go:141] libmachine: (ha-373000-m02) Calling .GetState
	I0818 12:09:00.321123    3971 main.go:141] libmachine: (ha-373000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:00.321199    3971 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid from json: 3847
	I0818 12:09:00.322083    3971 main.go:141] libmachine: (ha-373000-m02) DBG | hyperkit pid 3847 missing from process table
	I0818 12:09:00.322123    3971 status.go:330] ha-373000-m02 host status = "Stopped" (err=<nil>)
	I0818 12:09:00.322133    3971 status.go:343] host is not running, skipping remaining checks
	I0818 12:09:00.322142    3971 status.go:257] ha-373000-m02 status: &{Name:ha-373000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 12:09:00.322151    3971 status.go:255] checking status of ha-373000-m04 ...
	I0818 12:09:00.322385    3971 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:09:00.322405    3971 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:09:00.331306    3971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52011
	I0818 12:09:00.331681    3971 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:09:00.332030    3971 main.go:141] libmachine: Using API Version  1
	I0818 12:09:00.332040    3971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:09:00.332241    3971 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:09:00.332340    3971 main.go:141] libmachine: (ha-373000-m04) Calling .GetState
	I0818 12:09:00.332418    3971 main.go:141] libmachine: (ha-373000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:09:00.332498    3971 main.go:141] libmachine: (ha-373000-m04) DBG | hyperkit pid from json: 3877
	I0818 12:09:00.333404    3971 main.go:141] libmachine: (ha-373000-m04) DBG | hyperkit pid 3877 missing from process table
	I0818 12:09:00.333412    3971 status.go:330] ha-373000-m04 host status = "Stopped" (err=<nil>)
	I0818 12:09:00.333418    3971 status.go:343] host is not running, skipping remaining checks
	I0818 12:09:00.333424    3971 status.go:257] ha-373000-m04 status: &{Name:ha-373000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (24.98s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-784000 --driver=hyperkit 
E0818 12:11:48.065524    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-784000 --driver=hyperkit : (37.538685925s)
--- PASS: TestImageBuild/serial/Setup (37.54s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-784000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-784000: (1.660083479s)
--- PASS: TestImageBuild/serial/NormalBuild (1.66s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-784000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.83s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-784000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.62s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-784000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.63s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-301000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
E0818 12:12:37.574413    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-301000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (47.855421298s)
--- PASS: TestJSONOutput/start/Command (47.86s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-301000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.47s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-301000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.44s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-301000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-301000 --output=json --user=testUser: (8.316649544s)
--- PASS: TestJSONOutput/stop/Command (8.32s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-965000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-965000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (361.346086ms)

-- stdout --
	{"specversion":"1.0","id":"0d593983-57e9-4f52-bb1f-de82aed6b236","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-965000] minikube v1.33.1 on Darwin 14.6.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ca5c0854-fd3a-4232-b528-366cda394fa9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19423"}}
	{"specversion":"1.0","id":"8b9281c6-2da5-4a48-aadc-5840e893a85b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig"}}
	{"specversion":"1.0","id":"e9df6584-6b3d-4c10-96b8-79555aae96b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"ed8f31ec-b3cf-46ad-bc2a-341c86662c1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"52882246-fae2-4771-8dce-6580b22a091a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube"}}
	{"specversion":"1.0","id":"01ed3078-ee80-4b3a-9ea1-841138f12eec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3ff9dde1-2d96-498b-aca2-fddc7541b787","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-965000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-965000
--- PASS: TestErrorJSONOutput (0.57s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-818000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-818000 --driver=hyperkit : (39.826362504s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-820000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-820000 --driver=hyperkit : (41.097127966s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-818000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-820000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-820000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-820000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-820000: (3.397047654s)
helpers_test.go:175: Cleaning up "first-818000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-818000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-818000: (5.236019072s)
--- PASS: TestMinikubeProfile (90.36s)
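
Note: the profile subcommands exercised above are the same ones available interactively; "profile list -ojson" dumps all profiles as JSON and "profile <name>" switches the active one:

$ out/minikube-darwin-amd64 profile list -ojson
$ out/minikube-darwin-amd64 profile first-818000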

TestMultiNode/serial/FreshStart2Nodes (106.94s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-770000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E0818 12:17:37.565458    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-770000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (1m46.701298468s)
multinode_test.go:102: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (106.94s)
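
Note: the fresh two-node cluster can be reproduced with the exact flags the test passes; --nodes=2 adds the worker at create time and --wait=true blocks until every component reports healthy:

$ out/minikube-darwin-amd64 start -p multinode-770000 --wait=true --memory=2200 --nodes=2 --driver=hyperkit
$ out/minikube-darwin-amd64 -p multinode-770000 status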

TestMultiNode/serial/DeployApp2Nodes (4.22s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-770000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-770000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-770000 -- rollout status deployment/busybox: (2.552781542s)
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-770000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-770000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-770000 -- exec busybox-7dff88458-7gspc -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-770000 -- exec busybox-7dff88458-wrqcd -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-770000 -- exec busybox-7dff88458-7gspc -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-770000 -- exec busybox-7dff88458-wrqcd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-770000 -- exec busybox-7dff88458-7gspc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-770000 -- exec busybox-7dff88458-wrqcd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.22s)
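
Note: the deployment check schedules one busybox replica per node, then resolves kubernetes.io, kubernetes.default, and kubernetes.default.svc.cluster.local from inside each pod to verify both external and in-cluster DNS, e.g.:

$ out/minikube-darwin-amd64 kubectl -p multinode-770000 -- exec busybox-7dff88458-7gspc -- nslookup kubernetes.default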

TestMultiNode/serial/PingHostFrom2Pods (0.89s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-770000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-770000 -- exec busybox-7dff88458-7gspc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-770000 -- exec busybox-7dff88458-7gspc -- sh -c "ping -c 1 192.169.0.1"
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-770000 -- exec busybox-7dff88458-wrqcd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-770000 -- exec busybox-7dff88458-wrqcd -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.89s)
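
Note: the pipeline above pulls the resolved address of host.minikube.internal out of nslookup's output (awk 'NR==5' keeps the answer line, cut takes its third field), and the follow-up ping confirms each pod can reach the host-side gateway (192.169.0.1 on this hyperkit network):

$ out/minikube-darwin-amd64 kubectl -p multinode-770000 -- exec busybox-7dff88458-7gspc -- sh -c "ping -c 1 192.169.0.1"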

TestMultiNode/serial/AddNode (45.63s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-770000 -v 3 --alsologtostderr
E0818 12:19:00.638134    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-770000 -v 3 --alsologtostderr: (45.312184316s)
multinode_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.63s)

TestMultiNode/serial/MultiNodeLabels (0.05s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-770000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.05s)

TestMultiNode/serial/ProfileList (0.18s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.18s)

TestMultiNode/serial/CopyFile (5.4s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 cp testdata/cp-test.txt multinode-770000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 ssh -n multinode-770000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 cp multinode-770000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile1652684765/001/cp-test_multinode-770000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 ssh -n multinode-770000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 cp multinode-770000:/home/docker/cp-test.txt multinode-770000-m02:/home/docker/cp-test_multinode-770000_multinode-770000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 ssh -n multinode-770000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 ssh -n multinode-770000-m02 "sudo cat /home/docker/cp-test_multinode-770000_multinode-770000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 cp multinode-770000:/home/docker/cp-test.txt multinode-770000-m03:/home/docker/cp-test_multinode-770000_multinode-770000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 ssh -n multinode-770000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 ssh -n multinode-770000-m03 "sudo cat /home/docker/cp-test_multinode-770000_multinode-770000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 cp testdata/cp-test.txt multinode-770000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 ssh -n multinode-770000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 cp multinode-770000-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile1652684765/001/cp-test_multinode-770000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 ssh -n multinode-770000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 cp multinode-770000-m02:/home/docker/cp-test.txt multinode-770000:/home/docker/cp-test_multinode-770000-m02_multinode-770000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 ssh -n multinode-770000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 ssh -n multinode-770000 "sudo cat /home/docker/cp-test_multinode-770000-m02_multinode-770000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 cp multinode-770000-m02:/home/docker/cp-test.txt multinode-770000-m03:/home/docker/cp-test_multinode-770000-m02_multinode-770000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 ssh -n multinode-770000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 ssh -n multinode-770000-m03 "sudo cat /home/docker/cp-test_multinode-770000-m02_multinode-770000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 cp testdata/cp-test.txt multinode-770000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 ssh -n multinode-770000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 cp multinode-770000-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile1652684765/001/cp-test_multinode-770000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 ssh -n multinode-770000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 cp multinode-770000-m03:/home/docker/cp-test.txt multinode-770000:/home/docker/cp-test_multinode-770000-m03_multinode-770000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 ssh -n multinode-770000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 ssh -n multinode-770000 "sudo cat /home/docker/cp-test_multinode-770000-m03_multinode-770000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 cp multinode-770000-m03:/home/docker/cp-test.txt multinode-770000-m02:/home/docker/cp-test_multinode-770000-m03_multinode-770000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 ssh -n multinode-770000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 ssh -n multinode-770000-m02 "sudo cat /home/docker/cp-test_multinode-770000-m03_multinode-770000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.40s)
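
Note: "minikube cp" accepts a <node>:<path> form on either side, so the sequence above covers host-to-node, node-to-host, and node-to-node copies, each verified with an ssh'd cat, e.g.:

$ out/minikube-darwin-amd64 -p multinode-770000 cp testdata/cp-test.txt multinode-770000-m02:/home/docker/cp-test.txt
$ out/minikube-darwin-amd64 -p multinode-770000 ssh -n multinode-770000-m02 "sudo cat /home/docker/cp-test.txt"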

TestMultiNode/serial/StopNode (2.84s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p multinode-770000 node stop m03: (2.337493019s)
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-770000 status: exit status 7 (249.974971ms)

-- stdout --
	multinode-770000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-770000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-770000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-770000 status --alsologtostderr: exit status 7 (249.01872ms)

-- stdout --
	multinode-770000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-770000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-770000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0818 12:19:37.428091    4705 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:19:37.428867    4705 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:19:37.428876    4705 out.go:358] Setting ErrFile to fd 2...
	I0818 12:19:37.428883    4705 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:19:37.429379    4705 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1007/.minikube/bin
	I0818 12:19:37.429577    4705 out.go:352] Setting JSON to false
	I0818 12:19:37.429600    4705 mustload.go:65] Loading cluster: multinode-770000
	I0818 12:19:37.429629    4705 notify.go:220] Checking for updates...
	I0818 12:19:37.429883    4705 config.go:182] Loaded profile config "multinode-770000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:19:37.429898    4705 status.go:255] checking status of multinode-770000 ...
	I0818 12:19:37.430274    4705 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:19:37.430316    4705 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:19:37.439345    4705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52930
	I0818 12:19:37.439679    4705 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:19:37.440088    4705 main.go:141] libmachine: Using API Version  1
	I0818 12:19:37.440100    4705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:19:37.440329    4705 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:19:37.440454    4705 main.go:141] libmachine: (multinode-770000) Calling .GetState
	I0818 12:19:37.440529    4705 main.go:141] libmachine: (multinode-770000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:19:37.440603    4705 main.go:141] libmachine: (multinode-770000) DBG | hyperkit pid from json: 4399
	I0818 12:19:37.441753    4705 status.go:330] multinode-770000 host status = "Running" (err=<nil>)
	I0818 12:19:37.441771    4705 host.go:66] Checking if "multinode-770000" exists ...
	I0818 12:19:37.442011    4705 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:19:37.442034    4705 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:19:37.450331    4705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52932
	I0818 12:19:37.450677    4705 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:19:37.451006    4705 main.go:141] libmachine: Using API Version  1
	I0818 12:19:37.451015    4705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:19:37.451206    4705 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:19:37.451357    4705 main.go:141] libmachine: (multinode-770000) Calling .GetIP
	I0818 12:19:37.451456    4705 host.go:66] Checking if "multinode-770000" exists ...
	I0818 12:19:37.451707    4705 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:19:37.451733    4705 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:19:37.461254    4705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52934
	I0818 12:19:37.461692    4705 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:19:37.462066    4705 main.go:141] libmachine: Using API Version  1
	I0818 12:19:37.462095    4705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:19:37.462289    4705 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:19:37.462408    4705 main.go:141] libmachine: (multinode-770000) Calling .DriverName
	I0818 12:19:37.462568    4705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 12:19:37.462592    4705 main.go:141] libmachine: (multinode-770000) Calling .GetSSHHostname
	I0818 12:19:37.462672    4705 main.go:141] libmachine: (multinode-770000) Calling .GetSSHPort
	I0818 12:19:37.462774    4705 main.go:141] libmachine: (multinode-770000) Calling .GetSSHKeyPath
	I0818 12:19:37.462859    4705 main.go:141] libmachine: (multinode-770000) Calling .GetSSHUsername
	I0818 12:19:37.462944    4705 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/multinode-770000/id_rsa Username:docker}
	I0818 12:19:37.495183    4705 ssh_runner.go:195] Run: systemctl --version
	I0818 12:19:37.499321    4705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 12:19:37.511102    4705 kubeconfig.go:125] found "multinode-770000" server: "https://192.169.0.13:8443"
	I0818 12:19:37.511128    4705 api_server.go:166] Checking apiserver status ...
	I0818 12:19:37.511168    4705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 12:19:37.521721    4705 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1895/cgroup
	W0818 12:19:37.528849    4705 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1895/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0818 12:19:37.528899    4705 ssh_runner.go:195] Run: ls
	I0818 12:19:37.531978    4705 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0818 12:19:37.535049    4705 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0818 12:19:37.535060    4705 status.go:422] multinode-770000 apiserver status = Running (err=<nil>)
	I0818 12:19:37.535070    4705 status.go:257] multinode-770000 status: &{Name:multinode-770000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 12:19:37.535080    4705 status.go:255] checking status of multinode-770000-m02 ...
	I0818 12:19:37.535329    4705 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:19:37.535350    4705 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:19:37.544028    4705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52938
	I0818 12:19:37.544377    4705 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:19:37.544722    4705 main.go:141] libmachine: Using API Version  1
	I0818 12:19:37.544739    4705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:19:37.544936    4705 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:19:37.545046    4705 main.go:141] libmachine: (multinode-770000-m02) Calling .GetState
	I0818 12:19:37.545124    4705 main.go:141] libmachine: (multinode-770000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:19:37.545202    4705 main.go:141] libmachine: (multinode-770000-m02) DBG | hyperkit pid from json: 4423
	I0818 12:19:37.546335    4705 status.go:330] multinode-770000-m02 host status = "Running" (err=<nil>)
	I0818 12:19:37.546345    4705 host.go:66] Checking if "multinode-770000-m02" exists ...
	I0818 12:19:37.546609    4705 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:19:37.546631    4705 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:19:37.555277    4705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52940
	I0818 12:19:37.555650    4705 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:19:37.556005    4705 main.go:141] libmachine: Using API Version  1
	I0818 12:19:37.556023    4705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:19:37.556254    4705 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:19:37.556366    4705 main.go:141] libmachine: (multinode-770000-m02) Calling .GetIP
	I0818 12:19:37.556481    4705 host.go:66] Checking if "multinode-770000-m02" exists ...
	I0818 12:19:37.556744    4705 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:19:37.556768    4705 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:19:37.565301    4705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52942
	I0818 12:19:37.565647    4705 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:19:37.565977    4705 main.go:141] libmachine: Using API Version  1
	I0818 12:19:37.565995    4705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:19:37.566212    4705 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:19:37.566318    4705 main.go:141] libmachine: (multinode-770000-m02) Calling .DriverName
	I0818 12:19:37.566438    4705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 12:19:37.566450    4705 main.go:141] libmachine: (multinode-770000-m02) Calling .GetSSHHostname
	I0818 12:19:37.566531    4705 main.go:141] libmachine: (multinode-770000-m02) Calling .GetSSHPort
	I0818 12:19:37.566601    4705 main.go:141] libmachine: (multinode-770000-m02) Calling .GetSSHKeyPath
	I0818 12:19:37.566689    4705 main.go:141] libmachine: (multinode-770000-m02) Calling .GetSSHUsername
	I0818 12:19:37.566773    4705 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19423-1007/.minikube/machines/multinode-770000-m02/id_rsa Username:docker}
	I0818 12:19:37.599660    4705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 12:19:37.609998    4705 status.go:257] multinode-770000-m02 status: &{Name:multinode-770000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0818 12:19:37.610023    4705 status.go:255] checking status of multinode-770000-m03 ...
	I0818 12:19:37.610313    4705 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:19:37.610337    4705 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:19:37.619040    4705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52945
	I0818 12:19:37.619389    4705 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:19:37.619734    4705 main.go:141] libmachine: Using API Version  1
	I0818 12:19:37.619750    4705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:19:37.619950    4705 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:19:37.620056    4705 main.go:141] libmachine: (multinode-770000-m03) Calling .GetState
	I0818 12:19:37.620132    4705 main.go:141] libmachine: (multinode-770000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:19:37.620203    4705 main.go:141] libmachine: (multinode-770000-m03) DBG | hyperkit pid from json: 4496
	I0818 12:19:37.621333    4705 main.go:141] libmachine: (multinode-770000-m03) DBG | hyperkit pid 4496 missing from process table
	I0818 12:19:37.621364    4705 status.go:330] multinode-770000-m03 host status = "Stopped" (err=<nil>)
	I0818 12:19:37.621369    4705 status.go:343] host is not running, skipping remaining checks
	I0818 12:19:37.621377    4705 status.go:257] multinode-770000-m03 status: &{Name:multinode-770000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.84s)
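
Note: the non-zero exits above are the expected outcome, not a failure; "minikube status" encodes component state in its exit code, so exit status 7 with m03 stopped is exactly what the test asserts. To see the same behavior by hand:

$ out/minikube-darwin-amd64 -p multinode-770000 status; echo "exit=$?"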

TestMultiNode/serial/StartAfterStop (36.67s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-770000 node start m03 -v=7 --alsologtostderr: (36.313589995s)
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (36.67s)

TestMultiNode/serial/RestartKeepsNodes (204.38s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-770000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-770000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-770000: (18.853051828s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-770000 --wait=true -v=8 --alsologtostderr
E0818 12:21:48.048989    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:22:37.555755    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-770000 --wait=true -v=8 --alsologtostderr: (3m5.414061394s)
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-770000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (204.38s)
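
Note: the E0818 cert_rotation lines interleaved above (and throughout this report) appear to come from a background client-go certificate watcher still pointed at profiles deleted earlier in the run (addons-103000, functional-843000); they are log noise and do not affect the result. The restart itself is just a stop followed by a start with --wait=true, which recreates all three nodes:

$ out/minikube-darwin-amd64 stop -p multinode-770000
$ out/minikube-darwin-amd64 start -p multinode-770000 --wait=true -v=8 --alsologtostderr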

TestMultiNode/serial/DeleteNode (3.26s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-darwin-amd64 -p multinode-770000 node delete m03: (2.924639551s)
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (3.26s)

TestMultiNode/serial/StopMultiNode (16.8s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-amd64 -p multinode-770000 stop: (16.631282988s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-770000 status: exit status 7 (85.866445ms)

-- stdout --
	multinode-770000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-770000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-770000 status --alsologtostderr: exit status 7 (79.479764ms)

-- stdout --
	multinode-770000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-770000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0818 12:23:58.707596    4849 out.go:345] Setting OutFile to fd 1 ...
	I0818 12:23:58.707862    4849 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:23:58.707868    4849 out.go:358] Setting ErrFile to fd 2...
	I0818 12:23:58.707872    4849 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 12:23:58.708058    4849 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19423-1007/.minikube/bin
	I0818 12:23:58.708247    4849 out.go:352] Setting JSON to false
	I0818 12:23:58.708268    4849 mustload.go:65] Loading cluster: multinode-770000
	I0818 12:23:58.708309    4849 notify.go:220] Checking for updates...
	I0818 12:23:58.708574    4849 config.go:182] Loaded profile config "multinode-770000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 12:23:58.708589    4849 status.go:255] checking status of multinode-770000 ...
	I0818 12:23:58.708942    4849 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:23:58.709002    4849 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:23:58.717808    4849 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53176
	I0818 12:23:58.718231    4849 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:23:58.718646    4849 main.go:141] libmachine: Using API Version  1
	I0818 12:23:58.718675    4849 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:23:58.718884    4849 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:23:58.719007    4849 main.go:141] libmachine: (multinode-770000) Calling .GetState
	I0818 12:23:58.719095    4849 main.go:141] libmachine: (multinode-770000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:23:58.719168    4849 main.go:141] libmachine: (multinode-770000) DBG | hyperkit pid from json: 4769
	I0818 12:23:58.720028    4849 main.go:141] libmachine: (multinode-770000) DBG | hyperkit pid 4769 missing from process table
	I0818 12:23:58.720068    4849 status.go:330] multinode-770000 host status = "Stopped" (err=<nil>)
	I0818 12:23:58.720077    4849 status.go:343] host is not running, skipping remaining checks
	I0818 12:23:58.720083    4849 status.go:257] multinode-770000 status: &{Name:multinode-770000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 12:23:58.720103    4849 status.go:255] checking status of multinode-770000-m02 ...
	I0818 12:23:58.720346    4849 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0818 12:23:58.720366    4849 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0818 12:23:58.728875    4849 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53178
	I0818 12:23:58.729202    4849 main.go:141] libmachine: () Calling .GetVersion
	I0818 12:23:58.729507    4849 main.go:141] libmachine: Using API Version  1
	I0818 12:23:58.729521    4849 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 12:23:58.729744    4849 main.go:141] libmachine: () Calling .GetMachineName
	I0818 12:23:58.729863    4849 main.go:141] libmachine: (multinode-770000-m02) Calling .GetState
	I0818 12:23:58.729960    4849 main.go:141] libmachine: (multinode-770000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0818 12:23:58.730031    4849 main.go:141] libmachine: (multinode-770000-m02) DBG | hyperkit pid from json: 4784
	I0818 12:23:58.730909    4849 main.go:141] libmachine: (multinode-770000-m02) DBG | hyperkit pid 4784 missing from process table
	I0818 12:23:58.730935    4849 status.go:330] multinode-770000-m02 host status = "Stopped" (err=<nil>)
	I0818 12:23:58.730943    4849 status.go:343] host is not running, skipping remaining checks
	I0818 12:23:58.730950    4849 status.go:257] multinode-770000-m02 status: &{Name:multinode-770000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.80s)

TestMultiNode/serial/RestartMultiNode (130.31s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-770000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E0818 12:24:51.125528    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-770000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (2m9.969390871s)
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-770000 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (130.31s)

TestMultiNode/serial/ValidateNameConflict (44.25s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-770000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-770000-m02 --driver=hyperkit 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-770000-m02 --driver=hyperkit : exit status 14 (398.818527ms)

-- stdout --
	* [multinode-770000-m02] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-770000-m02' is duplicated with machine name 'multinode-770000-m02' in profile 'multinode-770000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-770000-m03 --driver=hyperkit 
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-770000-m03 --driver=hyperkit : (38.112035461s)
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-770000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-770000: exit status 80 (369.228913ms)

-- stdout --
	* Adding node m03 to cluster multinode-770000 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-770000-m03 already exists in multinode-770000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-770000-m03
E0818 12:26:48.037553    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-770000-m03: (5.310059622s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.25s)
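
Note: two distinct collision checks are exercised here: creating a profile whose name matches an existing machine name fails fast with exit 14 (MK_USAGE), while "node add" fails with exit 80 (GUEST_NODE_ADD) when the next generated node name (m03) collides with an existing profile:

$ out/minikube-darwin-amd64 start -p multinode-770000-m02 --driver=hyperkit
$ out/minikube-darwin-amd64 node add -p multinode-770000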

TestPreload (152.1s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-146000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
E0818 12:27:37.548270    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-146000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m26.829334826s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-146000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-146000 image pull gcr.io/k8s-minikube/busybox: (1.311833622s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-146000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-146000: (8.385315446s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-146000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-146000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (50.180832897s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-146000 image list
helpers_test.go:175: Cleaning up "test-preload-146000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-146000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-146000: (5.241211653s)
--- PASS: TestPreload (152.10s)
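
Note: the preload check is a four-step flow: start an older Kubernetes (v1.24.4) with --preload=false, pull an extra image into the VM, stop, then restart without a pinned version and confirm the image survived via "image list":

$ out/minikube-darwin-amd64 -p test-preload-146000 image pull gcr.io/k8s-minikube/busybox
$ out/minikube-darwin-amd64 -p test-preload-146000 image list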

TestSkaffold (114.11s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe3113781813 version
skaffold_test.go:59: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe3113781813 version: (1.760016663s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-977000 --memory=2600 --driver=hyperkit 
E0818 12:32:37.618469    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-977000 --memory=2600 --driver=hyperkit : (39.790412266s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe3113781813 run --minikube-profile skaffold-977000 --kube-context skaffold-977000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe3113781813 run --minikube-profile skaffold-977000 --kube-context skaffold-977000 --status-check=true --port-forward=false --interactive=false: (54.905317642s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-d5bf45c56-mp494" [2906482d-9ee7-416c-80f4-837c3e5c3664] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003956684s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-568bd69bd4-4xkcs" [7d3ea4b4-c5af-41ad-97d6-9146a541951d] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003834613s
helpers_test.go:175: Cleaning up "skaffold-977000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-977000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-977000: (5.245902722s)
--- PASS: TestSkaffold (114.11s)
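
Note: skaffold is pointed at the cluster explicitly via --minikube-profile and --kube-context; the same invocation should work with a release skaffold binary in place of the temp-file path the test downloads:

$ skaffold run --minikube-profile skaffold-977000 --kube-context skaffold-977000 --status-check=true --port-forward=false --interactive=false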

TestRunningBinaryUpgrade (88.84s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.3133052745 start -p running-upgrade-771000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.3133052745 start -p running-upgrade-771000 --memory=2200 --vm-driver=hyperkit : (56.50919202s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-771000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-771000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (25.778044547s)
helpers_test.go:175: Cleaning up "running-upgrade-771000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-771000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-771000: (5.266909254s)
--- PASS: TestRunningBinaryUpgrade (88.84s)
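
Note: the running-upgrade path creates the cluster with a downloaded v1.26.0 release binary, then runs "start" on the same profile with the binary under test, which adopts the running VM instead of recreating it. With the old binary saved locally (placeholder path below):

$ /path/to/minikube-v1.26.0 start -p running-upgrade-771000 --memory=2200 --vm-driver=hyperkit
$ out/minikube-darwin-amd64 start -p running-upgrade-771000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit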

TestKubernetesUpgrade (124.04s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-490000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit 
E0818 12:46:48.088920    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-490000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit : (52.854244909s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-490000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-490000: (8.368222282s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-490000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-490000 status --format={{.Host}}: exit status 7 (67.758859ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-490000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=hyperkit 
E0818 12:47:37.598670    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-490000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=hyperkit : (33.822332383s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-490000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-490000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-490000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit : exit status 106 (478.799238ms)

-- stdout --
	* [kubernetes-upgrade-490000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-490000
	    minikube start -p kubernetes-upgrade-490000 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4900002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-490000 --kubernetes-version=v1.31.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-490000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:275: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-490000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=hyperkit : (24.851766815s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-490000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-490000
E0818 12:48:35.891460    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/skaffold-977000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-490000: (3.551519229s)
--- PASS: TestKubernetesUpgrade (124.04s)
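
Note: the go-template form of status is useful for scripting on a single field; after the stop it prints "Stopped" and exits 7, which the test explicitly treats as "may be ok" before attempting the version upgrade:

$ out/minikube-darwin-amd64 -p kubernetes-upgrade-490000 status --format={{.Host}}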

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.08s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19423
- KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current330533969/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current330533969/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current330533969/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current330533969/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.08s)
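
Note: both skip-upgrade subtests point MINIKUBE_HOME at a throwaway temp directory, so the setuid hyperkit driver there cannot be fixed up: sudo would prompt for a password and the test runs with --interactive=false, hence the expected "! Unable to update hyperkit driver" warning. On a real workstation the two printed commands are run once by hand (using your own MINIKUBE_HOME):

$ sudo chown root:wheel $MINIKUBE_HOME/.minikube/bin/docker-machine-driver-hyperkit
$ sudo chmod u+s $MINIKUBE_HOME/.minikube/bin/docker-machine-driver-hyperkit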

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.91s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19423
- KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1090808945/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1090808945/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1090808945/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1090808945/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.91s)

TestStoppedBinaryUpgrade/Setup (2.12s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.12s)

TestStoppedBinaryUpgrade/Upgrade (1301.66s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.1654526078 start -p stopped-upgrade-627000 --memory=2200 --vm-driver=hyperkit 
E0818 12:51:48.081644    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:52:20.752264    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:52:37.671425    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:53:35.963638    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/skaffold-977000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:54:59.040500    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/skaffold-977000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:56:48.156801    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:57:37.667140    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:58:11.244813    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
E0818 12:58:35.959976    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/skaffold-977000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.1654526078 start -p stopped-upgrade-627000 --memory=2200 --vm-driver=hyperkit : (10m45.544259858s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.1654526078 -p stopped-upgrade-627000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.1654526078 -p stopped-upgrade-627000 stop: (8.266232815s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-627000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E0818 13:01:48.152152    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:02:37.663298    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:03:35.955376    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/skaffold-977000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:06:48.148944    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:07:37.658261    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:08:35.952276    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/skaffold-977000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:09:00.741930    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-627000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (10m47.853469756s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (1301.66s)
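Stripped of the harness, the Upgrade subtest above is a three-step flow: boot a cluster with the legacy v1.26.0 binary, stop it, then start the same profile with the freshly built binary. Paths are shortened here for readability; note the old binary still takes --vm-driver where the new one takes --driver:

$ minikube-v1.26.0 start -p stopped-upgrade-627000 --memory=2200 --vm-driver=hyperkit
$ minikube-v1.26.0 -p stopped-upgrade-627000 stop
$ out/minikube-darwin-amd64 start -p stopped-upgrade-627000 --memory=2200 --driver=hyperkit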

TestStoppedBinaryUpgrade/MinikubeLogs (3.15s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-627000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-627000: (3.154514949s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.15s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.62s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-083000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-083000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (621.942877ms)
-- stdout --
	* [NoKubernetes-083000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19423
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19423-1007/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19423-1007/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.62s)
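Exit status 14 is the expected outcome here: the stderr block shows the MK_USAGE rejection, since --kubernetes-version cannot be combined with --no-kubernetes. When the version comes from a stored global config rather than a flag, the remedy printed above applies:

$ minikube config unset kubernetes-version
$ out/minikube-darwin-amd64 start -p NoKubernetes-083000 --no-kubernetes --driver=hyperkit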

TestNoKubernetes/serial/StartWithK8s (39.64s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-083000 --driver=hyperkit 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-083000 --driver=hyperkit : (39.456678523s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-083000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.64s)

TestNoKubernetes/serial/StartWithStopK8s (8.74s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-083000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-083000 --no-kubernetes --driver=hyperkit : (6.200087602s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-083000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-083000 status -o json: exit status 2 (149.501585ms)
-- stdout --
	{"Name":"NoKubernetes-083000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-083000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-083000: (2.389096395s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.74s)
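The status call above exits 2 while still printing valid JSON: the host is Running but Kubelet and APIServer are Stopped, so the non-zero exit reflects the stopped components while stdout stays machine-readable. A script can therefore branch on a field instead of the exit code (jq is an illustrative choice, not part of the test):

$ out/minikube-darwin-amd64 -p NoKubernetes-083000 status -o json | jq -r .Kubelet
Stopped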

TestNoKubernetes/serial/Start (19.97s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-083000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-083000 --no-kubernetes --driver=hyperkit : (19.971490639s)
--- PASS: TestNoKubernetes/serial/Start (19.97s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-083000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-083000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (129.599573ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)
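The non-zero exit is the passing outcome for this check: systemctl is-active follows LSB status codes and exits 3 for an inactive unit, which surfaces through minikube ssh as "Process exited with status 3". Dropping --quiet shows the state by name:

$ out/minikube-darwin-amd64 ssh -p NoKubernetes-083000 "sudo systemctl is-active kubelet"
inactive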

TestNoKubernetes/serial/ProfileList (0.38s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.38s)

TestNoKubernetes/serial/Stop (2.37s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-083000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-083000: (2.368419608s)
--- PASS: TestNoKubernetes/serial/Stop (2.37s)

TestNoKubernetes/serial/StartNoArgs (75.78s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-083000 --driver=hyperkit 
E0818 13:11:48.238266    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:12:37.747396    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-083000 --driver=hyperkit : (1m15.780738426s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (75.78s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-083000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-083000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (129.350783ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

TestNetworkPlugins/group/auto/Start (255.98s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-061000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit 
E0818 13:13:36.042797    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/skaffold-977000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-061000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit : (4m15.981584342s)
--- PASS: TestNetworkPlugins/group/auto/Start (255.98s)

TestNetworkPlugins/group/kindnet/Start (72.1s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-061000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit 
E0818 13:14:51.328601    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-061000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit : (1m12.101516172s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (72.10s)

TestNetworkPlugins/group/kindnet/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-xlqzv" [5c35c054-32b2-4c26-9de6-22150f887524] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003945933s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)
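The ControllerPod steps in these groups wait for a Running pod selected by label; roughly the same readiness gate can be reproduced with plain kubectl (label and namespace as logged above):

$ kubectl --context kindnet-061000 -n kube-system wait pod -l app=kindnet --for=condition=Ready --timeout=10m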

TestNetworkPlugins/group/kindnet/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-061000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.16s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.15s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-061000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-p5xqn" [68751c4a-15de-4775-a44e-389ca1e85e3d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-p5xqn" [68751c4a-15de-4775-a44e-389ca1e85e3d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.005037488s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.15s)
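Each NetCatPod step uses kubectl replace --force, which deletes any object left over from an earlier group before recreating it from the manifest, so the shared testdata deployment stays reusable across profiles:

$ kubectl --context kindnet-061000 replace --force -f testdata/netcat-deployment.yaml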

TestNetworkPlugins/group/kindnet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-061000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

TestNetworkPlugins/group/kindnet/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-061000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

TestNetworkPlugins/group/kindnet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-061000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)
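The Localhost and HairPin probes run the same netcat invocation against different targets: -z opens the connection without sending data, -w 5 caps the wait at five seconds, and -i 5 sets the probe interval. Only the destination changes:

$ nc -w 5 -i 5 -z localhost 8080   # Localhost: the pod reaches its own port over loopback
$ nc -w 5 -i 5 -z netcat 8080      # HairPin: the pod reaches itself back through the netcat service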

TestNetworkPlugins/group/calico/Start (66.33s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-061000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit 
E0818 13:16:48.237324    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-061000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit : (1m6.332839999s)
--- PASS: TestNetworkPlugins/group/calico/Start (66.33s)

TestNetworkPlugins/group/calico/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-jxfcb" [946b3647-c454-4db2-b0d8-552f1b4f8fb0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00388133s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.00s)

TestNetworkPlugins/group/calico/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-061000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.16s)

TestNetworkPlugins/group/calico/NetCatPod (12.13s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-061000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-275jl" [f0cc60b2-f1fa-4f6c-b344-bfb419d484bd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-275jl" [f0cc60b2-f1fa-4f6c-b344-bfb419d484bd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004632516s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.13s)

TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-061000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-061000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

TestNetworkPlugins/group/calico/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-061000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

TestNetworkPlugins/group/auto/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-061000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.16s)

TestNetworkPlugins/group/auto/NetCatPod (10.14s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-061000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-z4p44" [c5de6849-dedc-49ec-be81-7e97bd6f493a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-z4p44" [c5de6849-dedc-49ec-be81-7e97bd6f493a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004811988s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.14s)

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-061000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-061000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-061000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/Start (53.62s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-061000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit 
E0818 13:17:37.747438    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-061000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit : (53.62337791s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (53.62s)
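As the command above shows, --cni also accepts a path to a CNI manifest (testdata/kube-flannel.yaml) in addition to the built-in names exercised elsewhere in this report (kindnet, calico, flannel, bridge, false). Trimmed to the relevant flags:

$ minikube start -p custom-flannel-061000 --cni=testdata/kube-flannel.yaml --driver=hyperkit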

TestNetworkPlugins/group/false/Start (83.59s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-061000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p false-061000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit : (1m23.588738511s)
--- PASS: TestNetworkPlugins/group/false/Start (83.59s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-061000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.17s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-061000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mfrxt" [3c75533f-7be0-494a-b1a9-86097b55d196] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mfrxt" [3c75533f-7be0-494a-b1a9-86097b55d196] Running
E0818 13:18:36.042348    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/skaffold-977000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004600294s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.13s)

TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-061000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-061000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-061000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

TestNetworkPlugins/group/enable-default-cni/Start (188.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-061000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-061000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit : (3m8.387420484s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (188.39s)

TestNetworkPlugins/group/false/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-061000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.15s)

TestNetworkPlugins/group/false/NetCatPod (12.15s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-061000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4mzgp" [3258b4c1-01e5-4102-b74f-0ab4800698ea] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4mzgp" [3258b4c1-01e5-4102-b74f-0ab4800698ea] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.004721798s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.15s)

TestNetworkPlugins/group/false/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-061000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.13s)

TestNetworkPlugins/group/false/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-061000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.10s)

TestNetworkPlugins/group/false/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-061000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.10s)

TestNetworkPlugins/group/flannel/Start (51.73s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-061000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit 
E0818 13:20:16.281922    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/kindnet-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:20:16.288131    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/kindnet-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:20:16.300435    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/kindnet-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:20:16.322543    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/kindnet-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:20:16.364915    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/kindnet-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:20:16.446970    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/kindnet-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:20:16.608340    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/kindnet-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:20:16.929797    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/kindnet-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:20:17.573349    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/kindnet-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:20:18.855441    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/kindnet-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:20:21.418490    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/kindnet-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:20:26.540392    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/kindnet-061000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-061000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit : (51.731372629s)
--- PASS: TestNetworkPlugins/group/flannel/Start (51.73s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-7hd4q" [50b9ba07-e6e8-425d-ba3c-fc3399bcadcc] Running
E0818 13:20:36.783946    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/kindnet-061000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005892385s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-061000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.16s)

TestNetworkPlugins/group/flannel/NetCatPod (11.13s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-061000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tj5qq" [c3be2828-ef09-4c23-a263-43ca7201b31b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tj5qq" [c3be2828-ef09-4c23-a263-43ca7201b31b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.00365291s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.13s)

TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-061000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-061000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

TestNetworkPlugins/group/flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-061000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

TestNetworkPlugins/group/bridge/Start (166.15s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-061000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit 
E0818 13:21:38.230484    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/kindnet-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:21:48.237536    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/addons-103000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:21:58.586738    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/calico-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:21:58.593742    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/calico-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:21:58.607310    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/calico-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:21:58.630911    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/calico-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:21:58.672952    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/calico-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:21:58.756257    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/calico-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:21:58.918957    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/calico-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:21:59.241752    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/calico-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:21:59.884519    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/calico-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:22:01.167773    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/calico-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:22:03.730735    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/calico-061000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-061000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit : (2m46.150951163s)
--- PASS: TestNetworkPlugins/group/bridge/Start (166.15s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-061000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.16s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-061000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ndksf" [9f18eb6b-3825-4965-9b46-d67c81ba282f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0818 13:22:08.852351    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/calico-061000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-ndksf" [9f18eb6b-3825-4965-9b46-d67c81ba282f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005595832s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.14s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-061000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-061000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-061000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

TestNetworkPlugins/group/kubenet/Start (51.64s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-061000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit 
E0818 13:22:37.748482    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/functional-843000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:22:38.444761    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/auto-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:22:39.576504    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/calico-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:22:58.926220    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/auto-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:23:00.153238    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/kindnet-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:23:20.538576    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/calico-061000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-061000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit : (51.63532255s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (51.64s)
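Unlike the --cni groups, kubenet is selected through the kubelet-level --network-plugin flag rather than a CNI manifest; trimmed to the relevant flags, the start line above reduces to:

$ minikube start -p kubenet-061000 --network-plugin=kubenet --driver=hyperkit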

TestNetworkPlugins/group/kubenet/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-061000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.15s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.13s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-061000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zv24p" [56bfefd9-ff3e-4aff-9e66-40750389cef2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0818 13:23:28.673068    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/custom-flannel-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:23:28.679432    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/custom-flannel-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:23:28.691052    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/custom-flannel-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:23:28.713008    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/custom-flannel-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:23:28.754742    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/custom-flannel-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:23:28.836151    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/custom-flannel-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:23:28.997449    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/custom-flannel-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:23:29.319459    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/custom-flannel-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:23:29.960836    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/custom-flannel-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:23:31.243263    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/custom-flannel-061000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-zv24p" [56bfefd9-ff3e-4aff-9e66-40750389cef2] Running
E0818 13:23:33.804961    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/custom-flannel-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:23:36.042815    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/skaffold-977000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.003959029s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.13s)

TestNetworkPlugins/group/kubenet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-061000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)

TestNetworkPlugins/group/kubenet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-061000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.12s)

TestNetworkPlugins/group/kubenet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-061000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.10s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-061000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.15s)

TestNetworkPlugins/group/bridge/NetCatPod (12.13s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-061000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-cmrqq" [6dd70e1e-b235-4287-a238-c31cbfd30c21] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-cmrqq" [6dd70e1e-b235-4287-a238-c31cbfd30c21] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.003669251s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.13s)

TestNetworkPlugins/group/bridge/DNS (20.84s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-061000 exec deployment/netcat -- nslookup kubernetes.default
E0818 13:24:09.631143    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/false-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:24:09.638868    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/false-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:24:09.650298    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/false-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:24:09.650352    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/custom-flannel-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:24:09.673155    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/false-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:24:09.716319    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/false-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:24:09.798520    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/false-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:24:09.961193    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/false-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:24:10.282925    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/false-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:24:10.925870    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/false-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:24:12.207473    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/false-061000/client.crt: no such file or directory" logger="UnhandledError"
E0818 13:24:14.770453    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/false-061000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-061000 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.135631486s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
E0818 13:24:19.892703    1526 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19423-1007/.minikube/profiles/false-061000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Run:  kubectl --context bridge-061000 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context bridge-061000 exec deployment/netcat -- nslookup kubernetes.default: (5.137950145s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (20.84s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-061000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

Test skip (19/276)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (5.89s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-061000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-061000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-061000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-061000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-061000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-061000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-061000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-061000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-061000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-061000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-061000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> host: /etc/hosts:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> host: /etc/resolv.conf:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-061000

>>> host: crictl pods:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> host: crictl containers:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> k8s: describe netcat deployment:
error: context "cilium-061000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-061000" does not exist

>>> k8s: netcat logs:
error: context "cilium-061000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-061000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-061000" does not exist

>>> k8s: coredns logs:
error: context "cilium-061000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-061000" does not exist

>>> k8s: api server logs:
error: context "cilium-061000" does not exist

>>> host: /etc/cni:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> host: ip a s:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> host: ip r s:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> host: iptables-save:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> host: iptables table nat:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-061000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-061000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-061000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-061000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-061000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-061000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-061000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-061000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-061000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-061000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-061000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> host: kubelet daemon config:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> k8s: kubelet logs:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-061000

>>> host: docker daemon status:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> host: docker daemon config:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> host: docker system info:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> host: cri-docker daemon status:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> host: cri-docker daemon config:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> host: cri-dockerd version:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> host: containerd daemon status:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> host: containerd daemon config:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> host: containerd config dump:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> host: crio daemon status:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> host: crio daemon config:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> host: /etc/crio:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

>>> host: crio config:
* Profile "cilium-061000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-061000"

----------------------- debugLogs end: cilium-061000 [took: 5.610854308s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-061000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-061000
--- SKIP: TestNetworkPlugins/group/cilium (5.89s)
